Dec 02 13:44:02 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 02 13:44:02 crc restorecon[4602]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:02 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 02 13:44:03 crc restorecon[4602]: 
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 
13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 02 13:44:03 crc 
restorecon[4602]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 
13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 02 13:44:03 crc restorecon[4602]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 
13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc 
restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:44:03 crc restorecon[4602]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 02 13:44:03 crc restorecon[4602]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 02 13:44:04 crc kubenswrapper[4625]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 13:44:04 crc kubenswrapper[4625]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 02 13:44:04 crc kubenswrapper[4625]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 02 13:44:04 crc kubenswrapper[4625]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 02 13:44:04 crc kubenswrapper[4625]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 02 13:44:04 crc kubenswrapper[4625]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.567900 4625 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.601961 4625 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602016 4625 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602022 4625 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602028 4625 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602035 4625 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602040 4625 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602046 4625 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602052 4625 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602056 4625 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602063 4625 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602067 4625 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602072 4625 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602076 4625 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602083 4625 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602089 4625 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602095 4625 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602099 4625 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602104 4625 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602109 4625 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602113 4625 feature_gate.go:330] unrecognized feature gate: Example
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602118 4625 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602122 4625 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602127 4625 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602133 4625 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602138 4625 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602142 4625 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602147 4625 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602151 4625 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602154 4625 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602159 4625 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602173 4625 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602178 4625 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602185 4625 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602192 4625 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602202 4625 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602208 4625 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602213 4625 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602218 4625 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602223 4625 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602227 4625 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602232 4625 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602238 4625 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602242 4625 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602246 4625 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602251 4625 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602255 4625 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602262 4625 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602267 4625 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602291 4625 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602297 4625 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602301 4625 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602347 4625 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602362 4625 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602370 4625 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602375 4625 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602381 4625 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602388 4625 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602394 4625 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602399 4625 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602404 4625 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602409 4625 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602417 4625 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602424 4625 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602430 4625 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602438 4625 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602443 4625 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602450 4625 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602459 4625 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602464 4625 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602470 4625 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.602475 4625 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602624 4625 flags.go:64] FLAG: --address="0.0.0.0"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602643 4625 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602655 4625 flags.go:64] FLAG: --anonymous-auth="true"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602664 4625 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602673 4625 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602680 4625 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602690 4625 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602697 4625 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602704 4625 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602710 4625 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602717 4625 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602726 4625 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602732 4625 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602738 4625 flags.go:64] FLAG: --cgroup-root=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602744 4625 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602751 4625 flags.go:64] FLAG: --client-ca-file=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602757 4625 flags.go:64] FLAG: --cloud-config=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602763 4625 flags.go:64] FLAG: --cloud-provider=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602769 4625 flags.go:64] FLAG: --cluster-dns="[]"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602776 4625 flags.go:64] FLAG: --cluster-domain=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602782 4625 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602788 4625 flags.go:64] FLAG: --config-dir=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602794 4625 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602801 4625 flags.go:64] FLAG: --container-log-max-files="5"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602810 4625 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602819 4625 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602826 4625 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602833 4625 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602839 4625 flags.go:64] FLAG: --contention-profiling="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602845 4625 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602852 4625 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602858 4625 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602864 4625 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602872 4625 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602878 4625 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602884 4625 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602890 4625 flags.go:64] FLAG: --enable-load-reader="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602899 4625 flags.go:64] FLAG: --enable-server="true"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602916 4625 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602925 4625 flags.go:64] FLAG: --event-burst="100"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602930 4625 flags.go:64] FLAG: --event-qps="50"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602936 4625 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602942 4625 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602957 4625 flags.go:64] FLAG: --eviction-hard=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602966 4625 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602971 4625 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602977 4625 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602984 4625 flags.go:64] FLAG: --eviction-soft=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602990 4625 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.602995 4625 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603000 4625 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603005 4625 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603010 4625 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603016 4625 flags.go:64] FLAG: --fail-swap-on="true"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603022 4625 flags.go:64] FLAG: --feature-gates=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603028 4625 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603034 4625 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603042 4625 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603051 4625 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603057 4625 flags.go:64] FLAG: --healthz-port="10248"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603062 4625 flags.go:64] FLAG: --help="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603068 4625 flags.go:64] FLAG: --hostname-override=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603073 4625 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603078 4625 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603085 4625 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603090 4625 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603095 4625 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603100 4625 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603106 4625 flags.go:64] FLAG: --image-service-endpoint=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603111 4625 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603117 4625 flags.go:64] FLAG: --kube-api-burst="100"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603122 4625 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603128 4625 flags.go:64] FLAG: --kube-api-qps="50"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603133 4625 flags.go:64] FLAG: --kube-reserved=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603139 4625 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603144 4625 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603149 4625 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603154 4625 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603160 4625 flags.go:64] FLAG: --lock-file=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603165 4625 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603170 4625 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603176 4625 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603193 4625 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603200 4625 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603206 4625 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603212 4625 flags.go:64] FLAG: --logging-format="text"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603219 4625 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603225 4625 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603230 4625 flags.go:64] FLAG: --manifest-url=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603238 4625 flags.go:64] FLAG: --manifest-url-header=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603246 4625 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603251 4625 flags.go:64] FLAG: --max-open-files="1000000"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603258 4625 flags.go:64] FLAG: --max-pods="110"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603264 4625 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603270 4625 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603276 4625 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603281 4625 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603287 4625 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603293 4625 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603300 4625 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603341 4625 flags.go:64] FLAG: --node-status-max-images="50"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603348 4625 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603354 4625 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603361 4625 flags.go:64] FLAG: --pod-cidr=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603367 4625 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603380 4625 flags.go:64] FLAG: --pod-manifest-path=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603385 4625 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603390 4625 flags.go:64] FLAG: --pods-per-core="0"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603396 4625 flags.go:64] FLAG: --port="10250"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603401 4625 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603407 4625 flags.go:64] FLAG: --provider-id=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603413 4625 flags.go:64] FLAG: --qos-reserved=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603419 4625 flags.go:64] FLAG: --read-only-port="10255"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603425 4625 flags.go:64] FLAG: --register-node="true"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603431 4625 flags.go:64] FLAG: --register-schedulable="true"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603436 4625 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603448 4625 flags.go:64] FLAG: --registry-burst="10"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603454 4625 flags.go:64] FLAG: --registry-qps="5"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603461 4625 flags.go:64] FLAG: --reserved-cpus=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603468 4625 flags.go:64] FLAG: --reserved-memory=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603476 4625 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603482 4625 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603488 4625 flags.go:64] FLAG: --rotate-certificates="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603493 4625 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603500 4625 flags.go:64] FLAG: --runonce="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603505 4625 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603511 4625 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603517 4625 flags.go:64] FLAG: --seccomp-default="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603523 4625 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603528 4625 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603533 4625 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603538 4625 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603544 4625 flags.go:64] FLAG: --storage-driver-password="root"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603549 4625 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603554 4625 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603559 4625 flags.go:64] FLAG: --storage-driver-user="root"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603564 4625 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603569 4625 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603575 4625 flags.go:64] FLAG: --system-cgroups=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603579 4625 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603588 4625 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603595 4625 flags.go:64] FLAG: --tls-cert-file=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603600 4625 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603609 4625 flags.go:64] FLAG: --tls-min-version=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603615 4625 flags.go:64] FLAG: --tls-private-key-file=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603620 4625 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603626 4625 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603632 4625 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603637 4625 flags.go:64] FLAG: --v="2"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603646 4625 flags.go:64] FLAG: --version="false"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603657 4625 flags.go:64] FLAG: --vmodule=""
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603664 4625 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.603671 4625 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
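The FLAG dump above records every flag's effective value at startup, which makes it a convenient baseline to diff across restarts. A sketch of one way to pull it out of the journal, assuming the systemd unit on this node is named kubelet:

# Sketch: extract the effective FLAG values from the current boot's journal.
# The unit name "kubelet" is an assumption about this node's systemd setup.
journalctl -b -u kubelet --no-pager | grep -o 'FLAG: --[a-z-]*=.*' | sort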
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.604496 4625 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.614611 4625 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.614677 4625 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
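Each "unrecognized feature gate" warning above names an OpenShift-level gate that kubelet's embedded Kubernetes gate table does not know, and kubelet emits the full set once per parsing pass, so identical warning runs recur within the same second. To reduce the noise to the unique gate names, something like:

# Sketch: collapse the repeated warning runs to one line per gate name
# (again assuming the systemd unit is named kubelet).
journalctl -b -u kubelet --no-pager \
  | grep -o 'unrecognized feature gate: [A-Za-z0-9]*' \
  | sort -u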
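For reference, the non-default entries in the effective gate map printed above would be expressed on a vanilla kubelet as a featureGates stanza in KubeletConfiguration. This is a hypothetical sketch only; OpenShift manages gates through its cluster FeatureGate API rather than hand-edited kubelet config:

# Hypothetical sketch: the non-default gates from the map above, written as
# a KubeletConfiguration featureGates stanza.
cat <<'EOF'
featureGates:
  CloudDualStackNodeIPs: true
  DisableKubeletCloudCredentialProviders: true
  KMSv1: true
  ValidatingAdmissionPolicy: true
EOF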
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.616223 4625 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.621128 4625 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.621361 4625 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.622347 4625 server.go:997] "Starting client certificate rotation"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.622382 4625 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.622749 4625 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-04 13:08:34.498709174 +0000 UTC
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.622834 4625 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 791h24m29.875877508s for next certificate rotation
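The two certificate_manager.go:356 lines above fully determine the logged wait: the kubelet sleeps from now until the rotation deadline, and the deadline itself is jittered by client-go to fall somewhere around 70-90% of the certificate's lifetime (the 2026-01-04 deadline against a 2026-02-24 expiry is consistent with that, though the exact jitter is an assumption here). A quick check of the arithmetic, using only values printed in the log:

    from datetime import datetime, timezone

    # Both values copied from the certificate_manager.go:356 lines above;
    # "now" is taken from the log line's own microsecond timestamp.
    logged_now = datetime(2025, 12, 2, 13, 44, 4, 622834, tzinfo=timezone.utc)
    deadline = datetime(2026, 1, 4, 13, 8, 34, 498709, tzinfo=timezone.utc)

    wait = deadline - logged_now
    hours, rem = divmod(int(wait.total_seconds()), 3600)
    minutes, seconds = divmod(rem, 60)
    print(f"{hours}h{minutes}m{seconds}s")  # 791h24m29s, matching the logged wait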
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.650608 4625 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.652334 4625 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.680277 4625 log.go:25] "Validated CRI v1 runtime API"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.698965 4625 log.go:25] "Validated CRI v1 image API"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.701168 4625 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.719986 4625 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-02-13-38-09-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.720086 4625 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.731223 4625 manager.go:217] Machine: {Timestamp:2025-12-02 13:44:04.729096468 +0000 UTC m=+0.691273553 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:718d7937-78fb-44b3-9ae0-1d312b093168 BootID:4d1deca0-bc51-433c-8d69-fdb0e1fb8ace Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ae:b4:e3 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ae:b4:e3 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:7d:2a:73 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:9c:a2:28 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:92:3e:d5 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c8:e1:b5 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:42:3d:ec:8f:46:3e Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:aa:9b:9a:be:e9:8c Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.731429 4625 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.731578 4625 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.732381 4625 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.732589 4625 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.732624 4625 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.732835 4625 topology_manager.go:138] "Creating topology manager with none policy"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.732844 4625 container_manager_linux.go:303] "Creating device plugin manager"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.733067 4625 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.733094 4625 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.733350 4625 state_mem.go:36] "Initialized new in-memory state store"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.733433 4625 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.734132 4625 kubelet.go:418] "Attempting to sync node with API server"
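The nodeConfig dump above carries the inputs of the node allocatable computation: capacity minus KubeReserved and SystemReserved minus the hard eviction threshold. A back-of-the-envelope sketch for memory, using MemoryCapacity from the Machine line and the reservations above; this is the stock kubelet formula applied by hand, so treat the result as an approximation rather than something this log prints:

    # Node allocatable memory = capacity - kube-reserved - system-reserved
    #                           - hard eviction threshold.
    MiB = 1024 * 1024

    capacity = 25199480832          # MemoryCapacity from the Machine line
    system_reserved = 350 * MiB     # SystemReserved "memory":"350Mi"
    kube_reserved = 0               # KubeReserved is null in nodeConfig
    hard_eviction = 100 * MiB       # memory.available LessThan 100Mi

    allocatable = capacity - system_reserved - kube_reserved - hard_eviction
    print(f"{allocatable} bytes (~{allocatable / MiB:.0f}Mi)")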
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.734151 4625 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.734176 4625 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.734190 4625 kubelet.go:324] "Adding apiserver pod source"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.734202 4625 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.736144 4625 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.736717 4625 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.736713 4625 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused
Dec 02 13:44:04 crc kubenswrapper[4625]: E1202 13:44:04.736908 4625 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError"
Dec 02 13:44:04 crc kubenswrapper[4625]: E1202 13:44:04.736947 4625 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.742519 4625 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
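The reflector failures above all reduce to one symptom: TCP connects to api-int.crc.testing:6443 (38.102.83.153) are refused, which is expected this early in a cold boot of a single-node cluster where the API server itself runs as a static pod the kubelet has not started yet. A probe that reproduces the same check, assuming the same endpoint is in scope:

    import socket

    # Host and port are taken from the log lines above; adjust for
    # another cluster. A refused connect mirrors the reflector errors.
    try:
        socket.create_connection(("api-int.crc.testing", 6443), timeout=3).close()
        print("API endpoint reachable")
    except OSError as e:
        print(f"dial failed: {e}")  # e.g. [Errno 111] Connection refused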
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.743332 4625 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.743924 4625 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.743952 4625 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.743960 4625 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.743969 4625 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.743985 4625 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.743995 4625 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.744004 4625 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.744018 4625 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.744029 4625 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.744041 4625 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.744056 4625 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.744064 4625 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.744686 4625 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.745243 4625 server.go:1280] "Started kubelet"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.746388 4625 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.746872 4625 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.746897 4625 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 02 13:44:04 crc systemd[1]: Started Kubernetes Kubelet.
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.747637 4625 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.747904 4625 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.747928 4625 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.748351 4625 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 22:02:45.467632343 +0000 UTC
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.750252 4625 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 80h18m40.717385576s for next certificate rotation
Dec 02 13:44:04 crc kubenswrapper[4625]: E1202 13:44:04.750523 4625 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.750691 4625 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.750748 4625 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.751849 4625 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 02 13:44:04 crc kubenswrapper[4625]: E1202 13:44:04.753262 4625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="200ms"
Dec 02 13:44:04 crc kubenswrapper[4625]: E1202 13:44:04.770580 4625 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.153:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187d69e12731189e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 13:44:04.745189534 +0000 UTC m=+0.707366599,LastTimestamp:2025-12-02 13:44:04.745189534 +0000 UTC m=+0.707366599,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.770925 4625 factory.go:55] Registering systemd factory
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.770955 4625 factory.go:221] Registration of the systemd container factory successfully
Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.771042 4625 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused
Dec 02 13:44:04 crc kubenswrapper[4625]: E1202 13:44:04.771107 4625 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.774033 4625 factory.go:153] Registering CRI-O factory
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.774082 4625 factory.go:221] Registration of the crio container factory successfully
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.774197 4625 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.774251 4625 factory.go:103] Registering Raw factory
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.774271 4625 manager.go:1196] Started watching for new ooms in manager
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.776084 4625 server.go:460] "Adding debug handlers to kubelet server"
Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.777979 4625 manager.go:319] Starting recovery of all containers
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781549 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781560 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781574 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781584 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781596 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781606 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781619 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781655 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781667 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781677 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781686 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781698 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781710 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781728 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781758 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781768 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781777 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781789 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781803 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781813 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781844 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781856 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781866 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781878 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781888 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781901 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781913 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781925 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781935 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781947 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781957 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781967 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781977 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781988 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.781998 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782010 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782019 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782033 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782043 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782055 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782068 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782080 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782090 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782100 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782110 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782120 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782137 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782149 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782159 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782169 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782180 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782192 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782202 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782213 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782223 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782233 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782243 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782255 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782265 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782277 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782286 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782295 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782304 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782330 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782339 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782349 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782358 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782370 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782379 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782389 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782399 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782410 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782420 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782451 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782463 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782477 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782491 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782501 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782511 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782522 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782533 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782543 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782553 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782565 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782576 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782587 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782598 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782608 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782619 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782632 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782644 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782656 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782667 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782678 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782689 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782701 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782713 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782724 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782740 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782753 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782765 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782778 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782793 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782807 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782858 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782877 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782893 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782908 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782924 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782936 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782947 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782958 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782969 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782980 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.782991 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783005 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783016 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783029 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783040 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783052 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783062 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783074 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783084 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783098 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783108 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783123 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783134 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783144 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783155 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783166 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783178 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783189 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783199 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783212 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783223 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783233 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783244 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783254 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783268 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783282 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783296 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783329 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783346 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783367 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783378 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783390 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783404 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783417 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783439 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783449 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783461 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783474 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783485 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783499 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783509 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783518 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783528 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783539 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783549 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783558 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783569 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783581 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783590 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783603 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783613 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783626 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783635 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783645 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783655 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783665 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783675 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783685 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783694 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783704 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783715 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783733 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783748 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783763 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783777 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783791 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783802 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783812 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783821 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783832 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783841 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783851 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783862 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783871 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783881 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783890 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783899 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783909 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783918 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783928 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783938 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783951 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783960 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783971 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783981 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.783991 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.784000 4625 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.784010 4625 reconstruct.go:97] "Volume reconstruction finished" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.784017 4625 reconciler.go:26] "Reconciler: start to sync state" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.808372 4625 manager.go:324] Recovery completed Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.844258 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.846547 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.846721 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.846734 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.847632 4625 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.847713 4625 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.847809 4625 state_mem.go:36] "Initialized new in-memory state store" Dec 02 13:44:04 crc kubenswrapper[4625]: E1202 13:44:04.850859 4625 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.852103 4625 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.854743 4625 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.854804 4625 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.854836 4625 kubelet.go:2335] "Starting kubelet main sync loop" Dec 02 13:44:04 crc kubenswrapper[4625]: E1202 13:44:04.854909 4625 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 02 13:44:04 crc kubenswrapper[4625]: W1202 13:44:04.908412 4625 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Dec 02 13:44:04 crc kubenswrapper[4625]: E1202 13:44:04.908605 4625 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.920996 4625 policy_none.go:49] "None policy: Start" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.922684 4625 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 02 13:44:04 crc kubenswrapper[4625]: I1202 13:44:04.922747 4625 state_mem.go:35] "Initializing new in-memory state store" Dec 02 13:44:04 crc kubenswrapper[4625]: E1202 13:44:04.951227 4625 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 02 13:44:04 crc kubenswrapper[4625]: E1202 13:44:04.953858 4625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="400ms" Dec 02 13:44:04 crc kubenswrapper[4625]: E1202 13:44:04.955993 4625 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.009250 4625 manager.go:334] "Starting Device Plugin manager" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.009391 4625 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.009408 4625 server.go:79] "Starting device plugin registration server" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.009860 4625 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.009882 4625 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.010202 4625 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.010495 4625 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.010506 4625 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 02 13:44:05 crc kubenswrapper[4625]: E1202 13:44:05.019547 4625 eviction_manager.go:285] 
"Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.110344 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.112120 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.112176 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.112188 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.112229 4625 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 13:44:05 crc kubenswrapper[4625]: E1202 13:44:05.112955 4625 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.153:6443: connect: connection refused" node="crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.156468 4625 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.156543 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.157676 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.157713 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.157726 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.157861 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.158077 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.158130 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.158501 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.158538 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.158547 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.158681 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.158887 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.158957 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.159158 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.159178 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.159189 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.159563 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.159587 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.159598 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.159696 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.159831 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.159874 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.160239 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.160263 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.160273 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.160582 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.160619 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.160636 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.160656 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.160729 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.160744 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.160768 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.160935 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.160964 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.161578 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.161602 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.161612 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.161742 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.161767 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.161925 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.161944 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.161953 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.162687 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.162709 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.162720 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.291617 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.291692 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.291722 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.291744 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.291767 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.291831 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: 
I1202 13:44:05.291866 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.291906 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.292000 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.292053 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.292073 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.292091 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.292107 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.292122 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.292206 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 
13:44:05.314078 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.316024 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.316095 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.316109 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.316146 4625 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 13:44:05 crc kubenswrapper[4625]: E1202 13:44:05.316818 4625 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.153:6443: connect: connection refused" node="crc" Dec 02 13:44:05 crc kubenswrapper[4625]: E1202 13:44:05.355036 4625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="800ms" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.393578 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.393659 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.393684 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.393707 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.393725 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.393744 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" 
(UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.393765 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.393785 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.393809 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.393825 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.393869 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.393885 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.393899 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.393915 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.393956 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.394562 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.394619 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.394651 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.394563 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.394619 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.394622 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.394683 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.394735 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.394749 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.394698 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.394727 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.394563 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.394753 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.394764 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.394730 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.490994 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.498672 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.524825 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.539336 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.545561 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:44:05 crc kubenswrapper[4625]: W1202 13:44:05.623346 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-df1faaddb6e3e0cfc95774f9ac8c5f2fcfcc9c439e6d38d1b7bde8c94adfdbb9 WatchSource:0}: Error finding container df1faaddb6e3e0cfc95774f9ac8c5f2fcfcc9c439e6d38d1b7bde8c94adfdbb9: Status 404 returned error can't find the container with id df1faaddb6e3e0cfc95774f9ac8c5f2fcfcc9c439e6d38d1b7bde8c94adfdbb9 Dec 02 13:44:05 crc kubenswrapper[4625]: W1202 13:44:05.624192 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-e90cc5d2356e3003f893d404f0d2e2682841de5c1e6d8ff32a1b4872805c2859 WatchSource:0}: Error finding container e90cc5d2356e3003f893d404f0d2e2682841de5c1e6d8ff32a1b4872805c2859: Status 404 returned error can't find the container with id e90cc5d2356e3003f893d404f0d2e2682841de5c1e6d8ff32a1b4872805c2859 Dec 02 13:44:05 crc kubenswrapper[4625]: W1202 13:44:05.624966 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-710d4b290926ca66845914106646ec394cc9aaa20ff9bd3d6f1763f187138c3d WatchSource:0}: Error finding container 710d4b290926ca66845914106646ec394cc9aaa20ff9bd3d6f1763f187138c3d: Status 404 returned error can't find the container with id 710d4b290926ca66845914106646ec394cc9aaa20ff9bd3d6f1763f187138c3d Dec 02 13:44:05 crc kubenswrapper[4625]: W1202 13:44:05.625445 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-b1037fdd9533f855e98091c36041eecb729d3b92ec5e981daa766da709d49c82 WatchSource:0}: Error finding container b1037fdd9533f855e98091c36041eecb729d3b92ec5e981daa766da709d49c82: Status 404 returned error can't find the container with id b1037fdd9533f855e98091c36041eecb729d3b92ec5e981daa766da709d49c82 Dec 02 13:44:05 crc kubenswrapper[4625]: W1202 13:44:05.626499 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-c2d745ec2aa8b1bbd9792f1c05bb6dc028c9c8444b03e344303bb48d3e3e300a WatchSource:0}: Error finding container c2d745ec2aa8b1bbd9792f1c05bb6dc028c9c8444b03e344303bb48d3e3e300a: Status 404 returned error can't find the container with id c2d745ec2aa8b1bbd9792f1c05bb6dc028c9c8444b03e344303bb48d3e3e300a Dec 02 13:44:05 crc kubenswrapper[4625]: W1202 13:44:05.660964 4625 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Dec 02 13:44:05 crc kubenswrapper[4625]: E1202 13:44:05.661065 4625 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:44:05 crc kubenswrapper[4625]: 
I1202 13:44:05.720737 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.722489 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.722554 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.722569 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.722601 4625 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 13:44:05 crc kubenswrapper[4625]: E1202 13:44:05.723229 4625 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.153:6443: connect: connection refused" node="crc" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.747994 4625 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.858545 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e90cc5d2356e3003f893d404f0d2e2682841de5c1e6d8ff32a1b4872805c2859"} Dec 02 13:44:05 crc kubenswrapper[4625]: W1202 13:44:05.859410 4625 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Dec 02 13:44:05 crc kubenswrapper[4625]: E1202 13:44:05.859500 4625 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.860002 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"710d4b290926ca66845914106646ec394cc9aaa20ff9bd3d6f1763f187138c3d"} Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.860986 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"df1faaddb6e3e0cfc95774f9ac8c5f2fcfcc9c439e6d38d1b7bde8c94adfdbb9"} Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.861819 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c2d745ec2aa8b1bbd9792f1c05bb6dc028c9c8444b03e344303bb48d3e3e300a"} Dec 02 13:44:05 crc kubenswrapper[4625]: I1202 13:44:05.862615 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b1037fdd9533f855e98091c36041eecb729d3b92ec5e981daa766da709d49c82"} Dec 02 13:44:05 crc kubenswrapper[4625]: W1202 13:44:05.939206 4625 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Dec 02 13:44:05 crc kubenswrapper[4625]: E1202 13:44:05.939302 4625 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:44:06 crc kubenswrapper[4625]: W1202 13:44:06.080368 4625 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Dec 02 13:44:06 crc kubenswrapper[4625]: E1202 13:44:06.080484 4625 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:44:06 crc kubenswrapper[4625]: E1202 13:44:06.155662 4625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="1.6s" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.523702 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.526791 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.526833 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.526844 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.526875 4625 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 13:44:06 crc kubenswrapper[4625]: E1202 13:44:06.527581 4625 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.153:6443: connect: connection refused" node="crc" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.748097 4625 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.867648 4625 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6" 
exitCode=0 Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.867726 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6"} Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.867852 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.869109 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.869140 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.869150 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.869954 4625 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b8801f26dbbac834aa13c015232b37f4b46556660f713fd1e6237284ccccaba8" exitCode=0 Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.870048 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b8801f26dbbac834aa13c015232b37f4b46556660f713fd1e6237284ccccaba8"} Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.870132 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.870828 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.871687 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.871710 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.871721 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.892103 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.892147 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.892157 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.892502 4625 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="68fb5ddeff76d87edf2b31325292c1b9720cbe78fa293bfe0c965e43486e3beb" exitCode=0 Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.892585 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.892593 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"68fb5ddeff76d87edf2b31325292c1b9720cbe78fa293bfe0c965e43486e3beb"} Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.894302 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.894372 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.894383 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.896254 4625 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7" exitCode=0 Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.896504 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.896510 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7"} Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.897680 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.897719 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.897733 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.898582 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c"} Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.898613 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109"} Dec 02 13:44:06 crc kubenswrapper[4625]: I1202 13:44:06.898627 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d"} Dec 02 13:44:07 crc kubenswrapper[4625]: W1202 13:44:07.480771 4625 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Dec 02 13:44:07 crc kubenswrapper[4625]: E1202 13:44:07.480858 4625 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.749155 4625 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Dec 02 13:44:07 crc kubenswrapper[4625]: E1202 13:44:07.758791 4625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="3.2s" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.905541 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6c6b1496dd33ef15eb66701070bf289b64b8fa1d9ad49f5cccccd15ede06a6f0"} Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.905613 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"46da573df86e132da8dc66092ef8a936efa16523b3869450cc4cf158412e8d6f"} Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.905627 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"69087f2c4f0daf7d97c8f803941e42b339d6482eca2edf92bc8f4d8aea9005d5"} Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.905599 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.906810 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.906840 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.906849 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.909042 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.909046 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376"} Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.910680 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.910817 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.910832 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 
13:44:07.913020 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c"} Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.913051 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714"} Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.913064 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989"} Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.915677 4625 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="47573245ffa8602de462e562939ef0c22661ad9562da0fdd59ffa80faf349328" exitCode=0 Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.915784 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"47573245ffa8602de462e562939ef0c22661ad9562da0fdd59ffa80faf349328"} Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.915831 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.918407 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"23c06e059fd4ff4a08b9aad36fa53b7d5b2abcc4ea6d5b6a2157ff5cd9302d63"} Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.918467 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.919336 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.919370 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.919381 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.919745 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.919783 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:07 crc kubenswrapper[4625]: I1202 13:44:07.919796 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:08 crc kubenswrapper[4625]: W1202 13:44:08.020081 4625 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Dec 02 13:44:08 crc kubenswrapper[4625]: E1202 13:44:08.020211 4625 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.127730 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.135015 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.135062 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.135074 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.135102 4625 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 13:44:08 crc kubenswrapper[4625]: E1202 13:44:08.135616 4625 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.153:6443: connect: connection refused" node="crc" Dec 02 13:44:08 crc kubenswrapper[4625]: W1202 13:44:08.464851 4625 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Dec 02 13:44:08 crc kubenswrapper[4625]: E1202 13:44:08.464977 4625 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.748093 4625 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Dec 02 13:44:08 crc kubenswrapper[4625]: W1202 13:44:08.838863 4625 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.153:6443: connect: connection refused Dec 02 13:44:08 crc kubenswrapper[4625]: E1202 13:44:08.838958 4625 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.153:6443: connect: connection refused" logger="UnhandledError" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.927203 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800"} Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 
13:44:08.927255 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971"} Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.927385 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.928328 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.928356 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.928368 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.930469 4625 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5c08179b45eb5882d3d3ca7e51675baa6bcdffd3a79305c9691f3e70f49aec4b" exitCode=0 Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.930560 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.930595 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.930684 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.931585 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5c08179b45eb5882d3d3ca7e51675baa6bcdffd3a79305c9691f3e70f49aec4b"} Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.931722 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.931821 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.938107 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.938141 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.938156 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.938904 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.938929 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.938940 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.939340 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 
13:44:08.939361 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.939373 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.939867 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.939891 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:08 crc kubenswrapper[4625]: I1202 13:44:08.939902 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:09 crc kubenswrapper[4625]: I1202 13:44:09.784943 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:09 crc kubenswrapper[4625]: I1202 13:44:09.936885 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5e10b903673dc5b0829767ead5f0ca3f1b005838acb7283a10b95520abcec2eb"} Dec 02 13:44:09 crc kubenswrapper[4625]: I1202 13:44:09.936989 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"651ce7282e408b79af29dd349de958ea8517f4b859ea507e10c1e3eb445ef23f"} Dec 02 13:44:09 crc kubenswrapper[4625]: I1202 13:44:09.937010 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9bb7aacb336bd0321561dee249409460e2a526e84e76d1cecbf80327268d14fd"} Dec 02 13:44:09 crc kubenswrapper[4625]: I1202 13:44:09.936918 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 13:44:09 crc kubenswrapper[4625]: I1202 13:44:09.937104 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:09 crc kubenswrapper[4625]: I1202 13:44:09.938473 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:09 crc kubenswrapper[4625]: I1202 13:44:09.938504 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:09 crc kubenswrapper[4625]: I1202 13:44:09.938517 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:10 crc kubenswrapper[4625]: I1202 13:44:10.558166 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:44:10 crc kubenswrapper[4625]: I1202 13:44:10.558389 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:10 crc kubenswrapper[4625]: I1202 13:44:10.559614 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:10 crc kubenswrapper[4625]: I1202 13:44:10.559648 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:10 crc kubenswrapper[4625]: I1202 13:44:10.559657 4625 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 02 13:44:10 crc kubenswrapper[4625]: I1202 13:44:10.943003 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 13:44:10 crc kubenswrapper[4625]: I1202 13:44:10.943075 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:10 crc kubenswrapper[4625]: I1202 13:44:10.943774 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:10 crc kubenswrapper[4625]: I1202 13:44:10.943876 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"94ddbbe3d4caed9ee789abe786a051f441d283c3a1eb142e2b4aa0b1043224bd"} Dec 02 13:44:10 crc kubenswrapper[4625]: I1202 13:44:10.943972 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"655e71df2ad0a26ff7cf89669fb4ef565b644f9c99d3b860677b6b7a295be614"} Dec 02 13:44:10 crc kubenswrapper[4625]: I1202 13:44:10.944401 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:10 crc kubenswrapper[4625]: I1202 13:44:10.944427 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:10 crc kubenswrapper[4625]: I1202 13:44:10.944440 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:10 crc kubenswrapper[4625]: I1202 13:44:10.944477 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:10 crc kubenswrapper[4625]: I1202 13:44:10.944493 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:10 crc kubenswrapper[4625]: I1202 13:44:10.944504 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:11 crc kubenswrapper[4625]: I1202 13:44:11.241787 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 02 13:44:11 crc kubenswrapper[4625]: I1202 13:44:11.336404 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:11 crc kubenswrapper[4625]: I1202 13:44:11.338287 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:11 crc kubenswrapper[4625]: I1202 13:44:11.338428 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:11 crc kubenswrapper[4625]: I1202 13:44:11.338457 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:11 crc kubenswrapper[4625]: I1202 13:44:11.338507 4625 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 13:44:11 crc kubenswrapper[4625]: I1202 13:44:11.944646 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:11 crc kubenswrapper[4625]: I1202 13:44:11.945586 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:11 crc kubenswrapper[4625]: I1202 13:44:11.945650 4625 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:11 crc kubenswrapper[4625]: I1202 13:44:11.945662 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.230222 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.230594 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.232145 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.232190 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.232204 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.593587 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.593765 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.593813 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.595442 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.595480 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.595492 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.760896 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.946450 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.946553 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.947453 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.947494 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.947506 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.947806 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 13:44:12.947846 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:12 crc kubenswrapper[4625]: I1202 
13:44:12.947860 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:13 crc kubenswrapper[4625]: I1202 13:44:13.558261 4625 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 13:44:13 crc kubenswrapper[4625]: I1202 13:44:13.558386 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 02 13:44:13 crc kubenswrapper[4625]: I1202 13:44:13.789360 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:44:13 crc kubenswrapper[4625]: I1202 13:44:13.789562 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:13 crc kubenswrapper[4625]: I1202 13:44:13.790681 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:13 crc kubenswrapper[4625]: I1202 13:44:13.790714 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:13 crc kubenswrapper[4625]: I1202 13:44:13.790727 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:15 crc kubenswrapper[4625]: E1202 13:44:15.020078 4625 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 02 13:44:15 crc kubenswrapper[4625]: I1202 13:44:15.824881 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:44:15 crc kubenswrapper[4625]: I1202 13:44:15.825104 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:15 crc kubenswrapper[4625]: I1202 13:44:15.827248 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:15 crc kubenswrapper[4625]: I1202 13:44:15.827299 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:15 crc kubenswrapper[4625]: I1202 13:44:15.827348 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:17 crc kubenswrapper[4625]: I1202 13:44:17.775934 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:44:17 crc kubenswrapper[4625]: I1202 13:44:17.776717 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:17 crc kubenswrapper[4625]: I1202 13:44:17.778455 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:17 crc kubenswrapper[4625]: I1202 
13:44:17.778511 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:17 crc kubenswrapper[4625]: I1202 13:44:17.778528 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:17 crc kubenswrapper[4625]: I1202 13:44:17.781060 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:44:18 crc kubenswrapper[4625]: I1202 13:44:18.000018 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:18 crc kubenswrapper[4625]: I1202 13:44:18.000916 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:18 crc kubenswrapper[4625]: I1202 13:44:18.000959 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:18 crc kubenswrapper[4625]: I1202 13:44:18.000972 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:18 crc kubenswrapper[4625]: I1202 13:44:18.004011 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:44:19 crc kubenswrapper[4625]: I1202 13:44:19.003360 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:19 crc kubenswrapper[4625]: I1202 13:44:19.007417 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:19 crc kubenswrapper[4625]: I1202 13:44:19.007486 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:19 crc kubenswrapper[4625]: I1202 13:44:19.007508 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:19 crc kubenswrapper[4625]: I1202 13:44:19.748577 4625 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 02 13:44:19 crc kubenswrapper[4625]: I1202 13:44:19.909443 4625 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 02 13:44:19 crc kubenswrapper[4625]: I1202 13:44:19.909538 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 02 13:44:19 crc kubenswrapper[4625]: I1202 13:44:19.918845 4625 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path 
\"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 02 13:44:19 crc kubenswrapper[4625]: I1202 13:44:19.919219 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 02 13:44:20 crc kubenswrapper[4625]: I1202 13:44:20.006782 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 13:44:20 crc kubenswrapper[4625]: I1202 13:44:20.009044 4625 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800" exitCode=255 Dec 02 13:44:20 crc kubenswrapper[4625]: I1202 13:44:20.009112 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800"} Dec 02 13:44:20 crc kubenswrapper[4625]: I1202 13:44:20.009405 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:20 crc kubenswrapper[4625]: I1202 13:44:20.010556 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:20 crc kubenswrapper[4625]: I1202 13:44:20.010669 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:20 crc kubenswrapper[4625]: I1202 13:44:20.010751 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:20 crc kubenswrapper[4625]: I1202 13:44:20.011494 4625 scope.go:117] "RemoveContainer" containerID="f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800" Dec 02 13:44:20 crc kubenswrapper[4625]: I1202 13:44:20.370546 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 02 13:44:20 crc kubenswrapper[4625]: I1202 13:44:20.370778 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:20 crc kubenswrapper[4625]: I1202 13:44:20.371986 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:20 crc kubenswrapper[4625]: I1202 13:44:20.372012 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:20 crc kubenswrapper[4625]: I1202 13:44:20.372021 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:20 crc kubenswrapper[4625]: I1202 13:44:20.399423 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 02 13:44:20 crc kubenswrapper[4625]: I1202 13:44:20.596852 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:21 crc kubenswrapper[4625]: I1202 13:44:21.020557 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 13:44:21 crc kubenswrapper[4625]: I1202 13:44:21.022196 4625 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7"} Dec 02 13:44:21 crc kubenswrapper[4625]: I1202 13:44:21.022358 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:21 crc kubenswrapper[4625]: I1202 13:44:21.022696 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:21 crc kubenswrapper[4625]: I1202 13:44:21.023806 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:21 crc kubenswrapper[4625]: I1202 13:44:21.023839 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:21 crc kubenswrapper[4625]: I1202 13:44:21.023849 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:21 crc kubenswrapper[4625]: I1202 13:44:21.024664 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:21 crc kubenswrapper[4625]: I1202 13:44:21.024683 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:21 crc kubenswrapper[4625]: I1202 13:44:21.024694 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:21 crc kubenswrapper[4625]: I1202 13:44:21.049066 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 02 13:44:22 crc kubenswrapper[4625]: I1202 13:44:22.025187 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:22 crc kubenswrapper[4625]: I1202 13:44:22.025935 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:22 crc kubenswrapper[4625]: I1202 13:44:22.025999 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:22 crc kubenswrapper[4625]: I1202 13:44:22.026703 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:22 crc kubenswrapper[4625]: I1202 13:44:22.026743 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:22 crc kubenswrapper[4625]: I1202 13:44:22.026759 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:22 crc kubenswrapper[4625]: I1202 13:44:22.026786 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:22 crc kubenswrapper[4625]: I1202 13:44:22.026870 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:22 crc kubenswrapper[4625]: I1202 13:44:22.026884 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:22 crc kubenswrapper[4625]: I1202 13:44:22.599398 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:23 crc kubenswrapper[4625]: I1202 
13:44:23.028128 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:23 crc kubenswrapper[4625]: I1202 13:44:23.029630 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:23 crc kubenswrapper[4625]: I1202 13:44:23.029678 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:23 crc kubenswrapper[4625]: I1202 13:44:23.029693 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:23 crc kubenswrapper[4625]: I1202 13:44:23.033728 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:23 crc kubenswrapper[4625]: I1202 13:44:23.559463 4625 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 13:44:23 crc kubenswrapper[4625]: I1202 13:44:23.559578 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 02 13:44:24 crc kubenswrapper[4625]: I1202 13:44:24.030401 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:24 crc kubenswrapper[4625]: I1202 13:44:24.031335 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:24 crc kubenswrapper[4625]: I1202 13:44:24.031384 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:24 crc kubenswrapper[4625]: I1202 13:44:24.031397 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:24 crc kubenswrapper[4625]: E1202 13:44:24.905945 4625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Dec 02 13:44:24 crc kubenswrapper[4625]: I1202 13:44:24.908226 4625 trace.go:236] Trace[1867487048]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 13:44:11.800) (total time: 13107ms): Dec 02 13:44:24 crc kubenswrapper[4625]: Trace[1867487048]: ---"Objects listed" error: 13107ms (13:44:24.908) Dec 02 13:44:24 crc kubenswrapper[4625]: Trace[1867487048]: [13.107674442s] [13.107674442s] END Dec 02 13:44:24 crc kubenswrapper[4625]: I1202 13:44:24.908666 4625 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 02 13:44:24 crc kubenswrapper[4625]: I1202 13:44:24.908597 4625 trace.go:236] Trace[1260095134]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 13:44:14.146) (total time: 10762ms): Dec 02 13:44:24 crc kubenswrapper[4625]: Trace[1260095134]: 
---"Objects listed" error: 10762ms (13:44:24.908) Dec 02 13:44:24 crc kubenswrapper[4625]: Trace[1260095134]: [10.762086262s] [10.762086262s] END Dec 02 13:44:24 crc kubenswrapper[4625]: I1202 13:44:24.908813 4625 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 02 13:44:24 crc kubenswrapper[4625]: I1202 13:44:24.910945 4625 trace.go:236] Trace[829376222]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 13:44:12.060) (total time: 12850ms): Dec 02 13:44:24 crc kubenswrapper[4625]: Trace[829376222]: ---"Objects listed" error: 12850ms (13:44:24.910) Dec 02 13:44:24 crc kubenswrapper[4625]: Trace[829376222]: [12.850608685s] [12.850608685s] END Dec 02 13:44:24 crc kubenswrapper[4625]: I1202 13:44:24.910998 4625 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 02 13:44:24 crc kubenswrapper[4625]: I1202 13:44:24.911414 4625 trace.go:236] Trace[1038325910]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Dec-2025 13:44:13.408) (total time: 11503ms): Dec 02 13:44:24 crc kubenswrapper[4625]: Trace[1038325910]: ---"Objects listed" error: 11502ms (13:44:24.911) Dec 02 13:44:24 crc kubenswrapper[4625]: Trace[1038325910]: [11.503133644s] [11.503133644s] END Dec 02 13:44:24 crc kubenswrapper[4625]: I1202 13:44:24.911446 4625 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 02 13:44:24 crc kubenswrapper[4625]: I1202 13:44:24.913768 4625 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 02 13:44:24 crc kubenswrapper[4625]: E1202 13:44:24.914331 4625 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.755033 4625 apiserver.go:52] "Watching apiserver" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.758085 4625 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.758626 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.759102 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.759190 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:25 crc kubenswrapper[4625]: E1202 13:44:25.759296 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.759572 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.759580 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.759755 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.760000 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:25 crc kubenswrapper[4625]: E1202 13:44:25.760070 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:25 crc kubenswrapper[4625]: E1202 13:44:25.760130 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.776449 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.776562 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.776748 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.777009 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.777068 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.777132 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.777260 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.777473 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.777878 4625 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-operator"/"metrics-tls" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.832949 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.853304 4625 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.878466 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.902374 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.919734 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.919814 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.919848 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.919869 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.919887 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.919906 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.919922 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.919938 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.919953 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.919977 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920001 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920019 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920036 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920052 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920067 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920083 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920100 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920114 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920137 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920153 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920168 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920185 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920205 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920221 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920236 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920365 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920385 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920399 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920413 4625 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920561 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920442 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920760 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920811 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920850 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920864 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920879 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920895 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920910 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920941 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920959 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.920978 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921002 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921075 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921097 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921117 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921134 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921140 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921138 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921203 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921236 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921270 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921258 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921299 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921412 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921433 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921464 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921487 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921519 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921544 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921573 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921594 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921616 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921637 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921659 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921680 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921699 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921718 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921734 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921754 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921771 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921817 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921832 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921850 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921871 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921895 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921918 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921938 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921957 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921978 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922006 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922030 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922048 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922068 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922088 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922105 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922121 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922138 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922158 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922178 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922196 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922213 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922232 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922251 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922268 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922284 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922303 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922342 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922391 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922408 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922427 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922445 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922471 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922588 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922608 
4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922629 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922646 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922664 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922681 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922699 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922716 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922736 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922755 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922837 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 
13:44:25.922857 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922872 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922902 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922920 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922937 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922956 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922976 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922997 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923015 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923033 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923052 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923070 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923086 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923105 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923121 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923138 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923156 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923175 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923193 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923213 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923234 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923250 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923269 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923288 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923309 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923349 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923368 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923384 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923403 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923423 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923441 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923462 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923480 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923497 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923515 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923531 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923550 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923566 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923590 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923612 4625 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923631 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923647 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923666 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923689 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923710 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923730 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923750 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923778 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923805 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923823 4625 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923841 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923859 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923878 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923894 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923913 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923931 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923952 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923973 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923992 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924010 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924028 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924046 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924071 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924089 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924109 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924128 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924150 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924176 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924198 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924222 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924243 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924270 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924292 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924443 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924467 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924486 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924503 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924523 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924541 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" 
(UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924565 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924586 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924604 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924625 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924643 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924661 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924679 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924698 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924717 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924734 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924751 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924806 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924845 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924868 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924898 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924925 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924950 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924979 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925008 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925035 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925067 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925096 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925121 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925146 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925175 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925288 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925311 4625 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925346 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925361 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925376 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925389 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925404 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921461 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921505 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921650 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921629 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.921953 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922026 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922157 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922211 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922399 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922486 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922580 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922633 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.922972 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923000 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923299 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923342 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923469 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923940 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.923996 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924103 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924741 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.924743 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925147 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925476 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: E1202 13:44:25.925567 4625 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:44:25 crc kubenswrapper[4625]: E1202 13:44:25.940286 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:26.440260183 +0000 UTC m=+22.402437258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.940361 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.940583 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.940820 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.940859 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.941084 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.941361 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.941680 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.941851 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.941968 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.942261 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.942412 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.942600 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.942762 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.943284 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.946595 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.947071 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.947853 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925622 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925747 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925876 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.925902 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.926047 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.926124 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.927197 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.929876 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.929931 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.930089 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.930767 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.930985 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.931202 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.931221 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.931693 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.932045 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.932409 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.932700 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.932960 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.933211 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.933947 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.933970 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.934647 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.934775 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.934792 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.935343 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.935372 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.935630 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.935687 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.935876 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.936016 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.936510 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.937146 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.937546 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.937829 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.937949 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.938081 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.938446 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.938720 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.938903 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.939053 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.939308 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.939455 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.939583 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.939857 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.939959 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.948743 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.951090 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.951894 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.953236 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.953508 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.954023 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.955539 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.956186 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.956303 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.959750 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: E1202 13:44:25.959921 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:26.459893279 +0000 UTC m=+22.422070354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.960033 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.960243 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.960466 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.961115 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.961656 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.961775 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.961964 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.962143 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.962288 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.962368 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.962584 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.963479 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.961707 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.964028 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.964075 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.964192 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.964305 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.964908 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.966649 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.965386 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.965670 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.965957 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.966244 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.966637 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.967033 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.967359 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.967456 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.967522 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.968070 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.968459 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.968734 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.965101 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.969903 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.970599 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.971190 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.971357 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.971420 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.971513 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.971615 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.971657 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.972186 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.972441 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.972348 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.972503 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.972526 4625 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.972805 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.973789 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.974477 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.975839 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.977001 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.977046 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.977106 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.977450 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.978200 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.978621 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.979252 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.979957 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.980286 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.980498 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.981054 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.981953 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.982348 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.982388 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.985188 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.988237 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: E1202 13:44:25.988307 4625 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.993270 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: E1202 13:44:25.993289 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:26.493269261 +0000 UTC m=+22.455446336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.991414 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.992840 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.990059 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.992630 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.993171 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.979084 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.988470 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.994061 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.994359 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.997723 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.997935 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.998377 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: I1202 13:44:25.998808 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:25 crc kubenswrapper[4625]: E1202 13:44:25.999466 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:44:25 crc kubenswrapper[4625]: E1202 13:44:25.999489 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:44:25 crc kubenswrapper[4625]: E1202 13:44:25.999503 4625 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:25 crc kubenswrapper[4625]: E1202 13:44:25.999558 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:26.499542869 +0000 UTC m=+22.461719934 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.003704 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.005750 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.006364 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.006715 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.010003 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.013717 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.013901 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.014009 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.014146 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.014687 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.014877 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.017537 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.017702 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.019781 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.020533 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.020621 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.021663 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.022142 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.023022 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.025957 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026037 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026118 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026133 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 
02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026144 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026154 4625 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026165 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026174 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026184 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026193 4625 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026203 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026214 4625 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026225 4625 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026268 4625 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026281 4625 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026293 4625 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026302 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 
13:44:26.026334 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026344 4625 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026353 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026362 4625 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026370 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026381 4625 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026433 4625 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026443 4625 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026452 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026462 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026474 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026483 4625 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026502 4625 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc 
kubenswrapper[4625]: I1202 13:44:26.026511 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026524 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026533 4625 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026544 4625 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026553 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026565 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026575 4625 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026586 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026595 4625 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026604 4625 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026612 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026620 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026629 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc 
kubenswrapper[4625]: I1202 13:44:26.026638 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026647 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026658 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026666 4625 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026677 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026686 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026695 4625 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026704 4625 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026713 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026722 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026731 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026740 4625 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026751 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" 
DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026762 4625 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026774 4625 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026785 4625 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026796 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026807 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026819 4625 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026829 4625 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026840 4625 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026848 4625 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026857 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026866 4625 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026875 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026885 4625 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026894 4625 
reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026903 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026912 4625 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026921 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026930 4625 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026939 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026948 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026957 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026967 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026977 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026987 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.026996 4625 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027006 4625 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: 
I1202 13:44:26.027016 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027024 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027032 4625 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027044 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027052 4625 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027060 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027069 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027077 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027085 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027093 4625 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027104 4625 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027114 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027125 4625 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027135 4625 reconciler_common.go:293] "Volume detached for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027146 4625 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027157 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027170 4625 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027183 4625 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027161 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027195 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027255 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027281 4625 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027297 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027301 4625 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027368 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027385 4625 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027396 4625 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027407 4625 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027420 4625 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027429 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027439 4625 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027449 4625 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027458 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027468 4625 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027478 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027487 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027499 4625 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027509 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" 
DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027520 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027530 4625 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027541 4625 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027552 4625 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027562 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027572 4625 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027581 4625 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027591 4625 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027605 4625 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027618 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027630 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027641 4625 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027650 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 
13:44:26.027660 4625 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027670 4625 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027681 4625 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027692 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027703 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027713 4625 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027723 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027734 4625 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027743 4625 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027754 4625 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027763 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027773 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027784 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" 
DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027794 4625 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027809 4625 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027819 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027830 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027843 4625 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027854 4625 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027864 4625 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027875 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027890 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027901 4625 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027911 4625 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027924 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027934 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc 
kubenswrapper[4625]: I1202 13:44:26.027944 4625 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027954 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027964 4625 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027974 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027984 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.027996 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028006 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028017 4625 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028026 4625 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028037 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028048 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028059 4625 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028070 4625 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: 
I1202 13:44:26.028081 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028092 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028107 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028120 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028130 4625 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028140 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028152 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028163 4625 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028176 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028190 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028205 4625 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028218 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028229 4625 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 
13:44:26.028240 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028253 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028268 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028285 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.028300 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.044303 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.045890 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.046711 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.046815 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.046909 4625 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.047062 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:26.547029159 +0000 UTC m=+22.509206234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.066170 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.069206 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.072446 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.082041 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.095383 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.116037 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.133560 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.133606 4625 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.535883 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.536218 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:27.536189723 +0000 UTC m=+23.498366798 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.536328 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.536369 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.536387 4625 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.536435 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:27.536423369 +0000 UTC m=+23.498600444 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.536544 4625 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.536625 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:27.536602944 +0000 UTC m=+23.498780019 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.536824 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.536981 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.537007 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.537022 4625 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.537074 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:27.537062717 +0000 UTC m=+23.499239792 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.637919 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.638077 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.638092 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.638106 4625 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.638150 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:27.63813843 +0000 UTC m=+23.600315505 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.642993 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-nqfkd"] Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.643368 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-nqfkd" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.645271 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-c6d9f"] Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.645846 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.647070 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.647199 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.649531 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.656059 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.656660 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.656708 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.656805 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.657663 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.683408 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.698405 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 
13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.718654 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.737165 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.738377 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d911ea35-69e2-4943-999e-389a961ce243-rootfs\") pod \"machine-config-daemon-c6d9f\" (UID: \"d911ea35-69e2-4943-999e-389a961ce243\") " pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.738410 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d911ea35-69e2-4943-999e-389a961ce243-mcd-auth-proxy-config\") pod \"machine-config-daemon-c6d9f\" (UID: \"d911ea35-69e2-4943-999e-389a961ce243\") " pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.738443 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46c2g\" (UniqueName: \"kubernetes.io/projected/815210e5-991f-4471-b687-6565a8751ba3-kube-api-access-46c2g\") pod \"node-resolver-nqfkd\" (UID: \"815210e5-991f-4471-b687-6565a8751ba3\") " pod="openshift-dns/node-resolver-nqfkd" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.738465 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/815210e5-991f-4471-b687-6565a8751ba3-hosts-file\") pod \"node-resolver-nqfkd\" (UID: \"815210e5-991f-4471-b687-6565a8751ba3\") " pod="openshift-dns/node-resolver-nqfkd" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.738490 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdr42\" (UniqueName: \"kubernetes.io/projected/d911ea35-69e2-4943-999e-389a961ce243-kube-api-access-rdr42\") pod \"machine-config-daemon-c6d9f\" (UID: 
\"d911ea35-69e2-4943-999e-389a961ce243\") " pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.738520 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d911ea35-69e2-4943-999e-389a961ce243-proxy-tls\") pod \"machine-config-daemon-c6d9f\" (UID: \"d911ea35-69e2-4943-999e-389a961ce243\") " pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.749029 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.760103 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc 
kubenswrapper[4625]: I1202 13:44:26.769161 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.780431 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.789970 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.817124 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.828818 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 
13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.839861 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdr42\" (UniqueName: \"kubernetes.io/projected/d911ea35-69e2-4943-999e-389a961ce243-kube-api-access-rdr42\") pod \"machine-config-daemon-c6d9f\" (UID: \"d911ea35-69e2-4943-999e-389a961ce243\") " pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.839915 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d911ea35-69e2-4943-999e-389a961ce243-proxy-tls\") pod \"machine-config-daemon-c6d9f\" (UID: \"d911ea35-69e2-4943-999e-389a961ce243\") " pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.839950 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d911ea35-69e2-4943-999e-389a961ce243-rootfs\") pod \"machine-config-daemon-c6d9f\" (UID: \"d911ea35-69e2-4943-999e-389a961ce243\") " pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.839968 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d911ea35-69e2-4943-999e-389a961ce243-mcd-auth-proxy-config\") pod \"machine-config-daemon-c6d9f\" (UID: \"d911ea35-69e2-4943-999e-389a961ce243\") " pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.839990 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46c2g\" (UniqueName: \"kubernetes.io/projected/815210e5-991f-4471-b687-6565a8751ba3-kube-api-access-46c2g\") pod \"node-resolver-nqfkd\" (UID: \"815210e5-991f-4471-b687-6565a8751ba3\") " pod="openshift-dns/node-resolver-nqfkd" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.840029 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/815210e5-991f-4471-b687-6565a8751ba3-hosts-file\") pod \"node-resolver-nqfkd\" (UID: \"815210e5-991f-4471-b687-6565a8751ba3\") " pod="openshift-dns/node-resolver-nqfkd" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.840107 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/815210e5-991f-4471-b687-6565a8751ba3-hosts-file\") pod \"node-resolver-nqfkd\" (UID: \"815210e5-991f-4471-b687-6565a8751ba3\") " pod="openshift-dns/node-resolver-nqfkd" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.840453 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d911ea35-69e2-4943-999e-389a961ce243-rootfs\") pod \"machine-config-daemon-c6d9f\" (UID: \"d911ea35-69e2-4943-999e-389a961ce243\") " pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.840894 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d911ea35-69e2-4943-999e-389a961ce243-mcd-auth-proxy-config\") pod \"machine-config-daemon-c6d9f\" (UID: \"d911ea35-69e2-4943-999e-389a961ce243\") " pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.841794 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.843557 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d911ea35-69e2-4943-999e-389a961ce243-proxy-tls\") pod \"machine-config-daemon-c6d9f\" (UID: \"d911ea35-69e2-4943-999e-389a961ce243\") " pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.851979 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.856781 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.856904 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.856959 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdr42\" (UniqueName: \"kubernetes.io/projected/d911ea35-69e2-4943-999e-389a961ce243-kube-api-access-rdr42\") pod \"machine-config-daemon-c6d9f\" (UID: \"d911ea35-69e2-4943-999e-389a961ce243\") " pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.857173 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:26 crc kubenswrapper[4625]: E1202 13:44:26.857286 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.859508 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.860286 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.861645 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.862372 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.862746 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.863569 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.864106 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.864780 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.865997 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.866698 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.867795 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.868727 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.870024 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.870021 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.870643 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.870937 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-46c2g\" (UniqueName: \"kubernetes.io/projected/815210e5-991f-4471-b687-6565a8751ba3-kube-api-access-46c2g\") pod \"node-resolver-nqfkd\" (UID: \"815210e5-991f-4471-b687-6565a8751ba3\") " pod="openshift-dns/node-resolver-nqfkd" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.871277 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.873056 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.873594 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.874297 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.874696 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.875296 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.876776 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.877192 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.878138 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.878578 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.879703 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.880086 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.881388 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 
13:44:26.882169 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.882798 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.887357 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.888367 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.889106 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.889613 4625 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.889717 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.891160 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.893050 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.893492 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.895044 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.895357 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.896065 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.896605 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.897620 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.898237 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.899033 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.899659 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.900739 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.901487 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.902426 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.902963 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.903883 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.904624 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.905515 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.905960 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.906818 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.907481 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.908085 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.908978 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.966726 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-nqfkd"
Dec 02 13:44:26 crc kubenswrapper[4625]: I1202 13:44:26.970919 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f"
Dec 02 13:44:26 crc kubenswrapper[4625]: W1202 13:44:26.994487 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod815210e5_991f_4471_b687_6565a8751ba3.slice/crio-eecaf874b004c5f1152f66f5baac67396655b414faf313ddf01916a5b6e8aca2 WatchSource:0}: Error finding container eecaf874b004c5f1152f66f5baac67396655b414faf313ddf01916a5b6e8aca2: Status 404 returned error can't find the container with id eecaf874b004c5f1152f66f5baac67396655b414faf313ddf01916a5b6e8aca2
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.028982 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lnf62"]
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.029299 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.030624 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lslqf"]
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.031430 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.038986 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.039358 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.039473 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-4njgt"]
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.039590 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.039929 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.040036 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.040072 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.040219 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.040264 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4njgt"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.040307 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.040424 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.040811 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.043604 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.046717 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.051025 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3f35ba1bd70ba72c1059dfd6db94ce4c3590696214de12fd53395ec7f5ea9f93"}
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.051554 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.051782 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.054022 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"e1ced35ceaf17ea8d5103fb191a1c4de78d2c9864827e72fd453c020511296f9"}
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.054810 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nqfkd" event={"ID":"815210e5-991f-4471-b687-6565a8751ba3","Type":"ContainerStarted","Data":"eecaf874b004c5f1152f66f5baac67396655b414faf313ddf01916a5b6e8aca2"}
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.061467 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29"}
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.061535 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0"}
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.061552 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3fcb697d26e1ec17e0d801a1b44d921720ddd8d71c7bdf0f29fb292366060c49"}
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.066109 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851"}
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.066167 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8868752d9a957a5cee7db7bd42864eea8253170b2bc40833aac8d3b339e99c9f"}
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.073660 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.110081 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.126037 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.142668 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.143935 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-etc-kubernetes\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.143993 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c3810fa9-85cb-4c38-a835-57f56463ff66-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144016 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-systemd-units\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144035 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-run-netns\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144055 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-env-overrides\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144072 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9tnx\" (UniqueName: \"kubernetes.io/projected/df437b8d-61b5-41ea-8f56-d5472e444b23-kube-api-access-b9tnx\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144090 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-multus-cni-dir\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144112 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-cni-bin\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144129 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df437b8d-61b5-41ea-8f56-d5472e444b23-ovn-node-metrics-cert\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144165 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-multus-socket-dir-parent\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144182 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-multus-conf-dir\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144214 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-openvswitch\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144230 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-system-cni-dir\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144245 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-run-netns\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144261 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dd11bfd3-e3e2-47ac-8354-30dd684045dc-multus-daemon-config\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144287 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-var-lib-openvswitch\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144340 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3810fa9-85cb-4c38-a835-57f56463ff66-cnibin\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144356 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3810fa9-85cb-4c38-a835-57f56463ff66-os-release\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144385 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-hostroot\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144400 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-slash\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144421 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-cnibin\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144437 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-os-release\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144452 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-etc-openvswitch\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144469 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144491 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-224t6\" (UniqueName: \"kubernetes.io/projected/dd11bfd3-e3e2-47ac-8354-30dd684045dc-kube-api-access-224t6\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144510 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-ovn\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144537 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-var-lib-cni-bin\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144555 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-systemd\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144580 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-log-socket\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144602 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3810fa9-85cb-4c38-a835-57f56463ff66-system-cni-dir\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144624 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-node-log\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144643 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-run-ovn-kubernetes\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144682 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dd11bfd3-e3e2-47ac-8354-30dd684045dc-cni-binary-copy\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144699 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-var-lib-kubelet\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144718 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-ovnkube-config\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144732 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-ovnkube-script-lib\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144748 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3810fa9-85cb-4c38-a835-57f56463ff66-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144771 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjdqd\" (UniqueName: \"kubernetes.io/projected/c3810fa9-85cb-4c38-a835-57f56463ff66-kube-api-access-pjdqd\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144819 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-run-k8s-cni-cncf-io\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144836 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-var-lib-cni-multus\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144851 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-run-multus-certs\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144868 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-kubelet\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144886 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-cni-netd\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.144906 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3810fa9-85cb-4c38-a835-57f56463ff66-cni-binary-copy\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.161358 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.179241 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.245260 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.245833 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.245911 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-cnibin\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.245939 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-os-release\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.245948 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.245969 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-etc-openvswitch\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246024 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-224t6\" (UniqueName: \"kubernetes.io/projected/dd11bfd3-e3e2-47ac-8354-30dd684045dc-kube-api-access-224t6\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246045 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-ovn\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246044 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-etc-openvswitch\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246062 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-var-lib-cni-bin\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246079 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-systemd\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246102 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-log-socket\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246119 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3810fa9-85cb-4c38-a835-57f56463ff66-system-cni-dir\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246135 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-node-log\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246149 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-run-ovn-kubernetes\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246169 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dd11bfd3-e3e2-47ac-8354-30dd684045dc-cni-binary-copy\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246159 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-cnibin\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246202 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-var-lib-kubelet\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246244 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-ovn\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246271 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-var-lib-cni-bin\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246304 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-systemd\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246384 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-log-socket\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246409 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3810fa9-85cb-4c38-a835-57f56463ff66-system-cni-dir\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246432 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-node-log\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246460 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-run-ovn-kubernetes\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246183 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-var-lib-kubelet\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62"
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246761 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3810fa9-85cb-4c38-a835-57f56463ff66-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246794 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjdqd\" (UniqueName: \"kubernetes.io/projected/c3810fa9-85cb-4c38-a835-57f56463ff66-kube-api-access-pjdqd\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246835 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-ovnkube-config\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246858 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-ovnkube-script-lib\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246884 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-kubelet\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246907 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-cni-netd\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246930 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3810fa9-85cb-4c38-a835-57f56463ff66-cni-binary-copy\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246956 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-run-k8s-cni-cncf-io\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.246978 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-var-lib-cni-multus\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" 
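Every "Failed to update status for pod" entry in this section fails at the same point: the API server's call to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is rejected during the TLS handshake because the webhook's serving certificate expired at 2025-08-24T17:21:41Z while the node's clock reads 2025-12-02T13:44:27Z. The following self-contained Go sketch reproduces that exact error class with crypto/x509 by self-signing a throwaway certificate whose NotAfter is already in the past; the subject name and validity window are invented stand-ins, not the real webhook certificate.

```go
// Hedged sketch: reproduce the "x509: certificate has expired or is not yet
// valid" failure seen in the webhook calls above, using an invented cert.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "network-node-identity.openshift.io"}, // hypothetical
		NotBefore:             time.Now().Add(-48 * time.Hour),
		NotAfter:              time.Now().Add(-24 * time.Hour), // already expired
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}

	pool := x509.NewCertPool()
	pool.AddCert(cert)
	// Verify performs the same validity-period check crypto/tls applies during
	// a handshake; CurrentTime defaults to time.Now(), which is past NotAfter.
	_, err = cert.Verify(x509.VerifyOptions{Roots: pool})
	fmt.Println(err) // x509: certificate has expired or is not yet valid: ...
}
```

Until that serving certificate is rotated, every status patch routed through this webhook will keep failing with the same message, which is why the error repeats verbatim across so many pods in this window.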
Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247001 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-run-multus-certs\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247025 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-etc-kubernetes\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247065 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c3810fa9-85cb-4c38-a835-57f56463ff66-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247093 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-systemd-units\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247096 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dd11bfd3-e3e2-47ac-8354-30dd684045dc-cni-binary-copy\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247121 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-run-netns\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247151 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-env-overrides\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247169 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9tnx\" (UniqueName: \"kubernetes.io/projected/df437b8d-61b5-41ea-8f56-d5472e444b23-kube-api-access-b9tnx\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247189 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-multus-cni-dir\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247205 4625 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-cni-bin\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247225 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df437b8d-61b5-41ea-8f56-d5472e444b23-ovn-node-metrics-cert\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247244 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-multus-conf-dir\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247254 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-os-release\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247277 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-multus-socket-dir-parent\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247318 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-run-netns\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247338 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-openvswitch\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247349 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-run-k8s-cni-cncf-io\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247367 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-var-lib-openvswitch\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247382 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-var-lib-cni-multus\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247392 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3810fa9-85cb-4c38-a835-57f56463ff66-cnibin\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247409 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-run-multus-certs\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247415 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3810fa9-85cb-4c38-a835-57f56463ff66-os-release\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247437 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-etc-kubernetes\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247441 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-system-cni-dir\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247472 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-run-netns\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247508 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dd11bfd3-e3e2-47ac-8354-30dd684045dc-multus-daemon-config\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247551 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-hostroot\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247571 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-slash\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 
02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247630 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3810fa9-85cb-4c38-a835-57f56463ff66-cni-binary-copy\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247651 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-slash\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.247771 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-openvswitch\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248024 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3810fa9-85cb-4c38-a835-57f56463ff66-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248081 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-var-lib-openvswitch\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248118 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3810fa9-85cb-4c38-a835-57f56463ff66-cnibin\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248173 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3810fa9-85cb-4c38-a835-57f56463ff66-os-release\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248222 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-system-cni-dir\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248256 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-host-run-netns\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248461 4625 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-env-overrides\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248489 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-ovnkube-config\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248660 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-systemd-units\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248670 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c3810fa9-85cb-4c38-a835-57f56463ff66-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248703 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-hostroot\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248747 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-kubelet\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248754 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-cni-netd\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248749 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-multus-conf-dir\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248787 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-multus-socket-dir-parent\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248795 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-cni-bin\") pod \"ovnkube-node-lslqf\" (UID: 
\"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.248817 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dd11bfd3-e3e2-47ac-8354-30dd684045dc-multus-cni-dir\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.249070 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dd11bfd3-e3e2-47ac-8354-30dd684045dc-multus-daemon-config\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.249179 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-ovnkube-script-lib\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.263123 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df437b8d-61b5-41ea-8f56-d5472e444b23-ovn-node-metrics-cert\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.267805 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.282756 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-224t6\" (UniqueName: \"kubernetes.io/projected/dd11bfd3-e3e2-47ac-8354-30dd684045dc-kube-api-access-224t6\") pod \"multus-lnf62\" (UID: \"dd11bfd3-e3e2-47ac-8354-30dd684045dc\") " pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.284548 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9tnx\" (UniqueName: \"kubernetes.io/projected/df437b8d-61b5-41ea-8f56-d5472e444b23-kube-api-access-b9tnx\") pod \"ovnkube-node-lslqf\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.288861 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjdqd\" (UniqueName: \"kubernetes.io/projected/c3810fa9-85cb-4c38-a835-57f56463ff66-kube-api-access-pjdqd\") pod \"multus-additional-cni-plugins-4njgt\" (UID: \"c3810fa9-85cb-4c38-a835-57f56463ff66\") " pod="openshift-multus/multus-additional-cni-plugins-4njgt" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.300786 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.346381 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.366205 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lnf62" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.372626 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 
13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:27 crc kubenswrapper[4625]: W1202 13:44:27.381566 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd11bfd3_e3e2_47ac_8354_30dd684045dc.slice/crio-c5782cd9773be58243b3cc582193c2ecfa2685f80cac18cb59ce56b65cc05650 WatchSource:0}: Error finding container c5782cd9773be58243b3cc582193c2ecfa2685f80cac18cb59ce56b65cc05650: Status 404 returned error can't find the container with id c5782cd9773be58243b3cc582193c2ecfa2685f80cac18cb59ce56b65cc05650 Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.395720 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.395934 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4njgt" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.396126 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:27 crc kubenswrapper[4625]: W1202 13:44:27.416680 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3810fa9_85cb_4c38_a835_57f56463ff66.slice/crio-332d5e8161179bab2c5db8fd6ef9a9c31c01afec50d6b3e545b98ff4f465ce1e WatchSource:0}: Error finding container 332d5e8161179bab2c5db8fd6ef9a9c31c01afec50d6b3e545b98ff4f465ce1e: Status 404 returned error can't find the container with id 332d5e8161179bab2c5db8fd6ef9a9c31c01afec50d6b3e545b98ff4f465ce1e Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.424949 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.453329 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.479980 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.509988 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.529578 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.550825 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.552101 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.552219 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.552259 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:27 crc kubenswrapper[4625]: E1202 13:44:27.552373 4625 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:44:27 crc kubenswrapper[4625]: E1202 13:44:27.552427 4625 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:44:27 crc kubenswrapper[4625]: E1202 13:44:27.552572 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:29.552358884 +0000 UTC m=+25.514535959 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:27 crc kubenswrapper[4625]: E1202 13:44:27.552647 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:29.55260701 +0000 UTC m=+25.514784085 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.552726 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:27 crc kubenswrapper[4625]: E1202 13:44:27.552793 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:44:27 crc kubenswrapper[4625]: E1202 13:44:27.552820 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:44:27 crc kubenswrapper[4625]: E1202 13:44:27.552839 4625 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:27 crc kubenswrapper[4625]: E1202 13:44:27.552797 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:29.552782335 +0000 UTC m=+25.514959580 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:44:27 crc kubenswrapper[4625]: E1202 13:44:27.552880 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:29.552870667 +0000 UTC m=+25.515047742 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.579796 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.594122 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.611670 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.626939 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.653459 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:27 crc kubenswrapper[4625]: E1202 13:44:27.653683 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:44:27 crc kubenswrapper[4625]: E1202 13:44:27.653717 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:44:27 crc kubenswrapper[4625]: E1202 13:44:27.653731 4625 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:27 crc kubenswrapper[4625]: E1202 13:44:27.653812 
4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:29.653789277 +0000 UTC m=+25.615966352 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:27 crc kubenswrapper[4625]: I1202 13:44:27.862094 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:27 crc kubenswrapper[4625]: E1202 13:44:27.862419 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.070600 4625 generic.go:334] "Generic (PLEG): container finished" podID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerID="b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756" exitCode=0 Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.070712 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerDied","Data":"b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756"} Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.070758 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerStarted","Data":"c3081f6a7ebaa0ab2558faa495ca2f234dc502dd123503856cded86dbf775bb4"} Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.074246 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lnf62" event={"ID":"dd11bfd3-e3e2-47ac-8354-30dd684045dc","Type":"ContainerStarted","Data":"407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2"} Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.074332 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lnf62" event={"ID":"dd11bfd3-e3e2-47ac-8354-30dd684045dc","Type":"ContainerStarted","Data":"c5782cd9773be58243b3cc582193c2ecfa2685f80cac18cb59ce56b65cc05650"} Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.076428 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" event={"ID":"c3810fa9-85cb-4c38-a835-57f56463ff66","Type":"ContainerStarted","Data":"4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6"} Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.076467 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" 
event={"ID":"c3810fa9-85cb-4c38-a835-57f56463ff66","Type":"ContainerStarted","Data":"332d5e8161179bab2c5db8fd6ef9a9c31c01afec50d6b3e545b98ff4f465ce1e"} Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.078885 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b"} Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.078922 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2"} Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.080254 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nqfkd" event={"ID":"815210e5-991f-4471-b687-6565a8751ba3","Type":"ContainerStarted","Data":"b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359"} Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.098997 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.121930 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.142114 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.158658 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.176389 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.191498 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.282861 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.300612 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-gnnxh"] Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.301436 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-gnnxh" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.305516 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.305749 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.305792 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.305975 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.310715 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 
13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.340736 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.362665 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8mkc\" (UniqueName: \"kubernetes.io/projected/98490ada-9405-4703-8fef-4211d5b99400-kube-api-access-c8mkc\") pod \"node-ca-gnnxh\" (UID: \"98490ada-9405-4703-8fef-4211d5b99400\") " pod="openshift-image-registry/node-ca-gnnxh" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.362726 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/98490ada-9405-4703-8fef-4211d5b99400-host\") pod \"node-ca-gnnxh\" (UID: \"98490ada-9405-4703-8fef-4211d5b99400\") " pod="openshift-image-registry/node-ca-gnnxh" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.362759 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/98490ada-9405-4703-8fef-4211d5b99400-serviceca\") pod \"node-ca-gnnxh\" (UID: \"98490ada-9405-4703-8fef-4211d5b99400\") " pod="openshift-image-registry/node-ca-gnnxh" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.366407 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.388914 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.413355 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.443911 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.463413 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.463979 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/98490ada-9405-4703-8fef-4211d5b99400-serviceca\") pod \"node-ca-gnnxh\" (UID: \"98490ada-9405-4703-8fef-4211d5b99400\") " pod="openshift-image-registry/node-ca-gnnxh" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.464076 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8mkc\" (UniqueName: \"kubernetes.io/projected/98490ada-9405-4703-8fef-4211d5b99400-kube-api-access-c8mkc\") pod \"node-ca-gnnxh\" (UID: \"98490ada-9405-4703-8fef-4211d5b99400\") " pod="openshift-image-registry/node-ca-gnnxh" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.464104 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/98490ada-9405-4703-8fef-4211d5b99400-host\") pod \"node-ca-gnnxh\" (UID: \"98490ada-9405-4703-8fef-4211d5b99400\") " pod="openshift-image-registry/node-ca-gnnxh" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.464193 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/98490ada-9405-4703-8fef-4211d5b99400-host\") pod \"node-ca-gnnxh\" (UID: 
\"98490ada-9405-4703-8fef-4211d5b99400\") " pod="openshift-image-registry/node-ca-gnnxh" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.465352 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/98490ada-9405-4703-8fef-4211d5b99400-serviceca\") pod \"node-ca-gnnxh\" (UID: \"98490ada-9405-4703-8fef-4211d5b99400\") " pod="openshift-image-registry/node-ca-gnnxh" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.485344 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.499459 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8mkc\" (UniqueName: \"kubernetes.io/projected/98490ada-9405-4703-8fef-4211d5b99400-kube-api-access-c8mkc\") pod \"node-ca-gnnxh\" (UID: \"98490ada-9405-4703-8fef-4211d5b99400\") " pod="openshift-image-registry/node-ca-gnnxh" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.505735 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.532332 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z 
is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.557814 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.578645 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.593719 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 
13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.609648 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.643579 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-gnnxh" Dec 02 13:44:28 crc kubenswrapper[4625]: W1202 13:44:28.667465 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98490ada_9405_4703_8fef_4211d5b99400.slice/crio-625e493406af59e65ee5524f11e6f0d1ee45676680f05b889c6a8124630a0411 WatchSource:0}: Error finding container 625e493406af59e65ee5524f11e6f0d1ee45676680f05b889c6a8124630a0411: Status 404 returned error can't find the container with id 625e493406af59e65ee5524f11e6f0d1ee45676680f05b889c6a8124630a0411 Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.681936 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.720362 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"m
ultus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.749016 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.777149 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:28Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.858397 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:28 crc kubenswrapper[4625]: E1202 13:44:28.858563 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:28 crc kubenswrapper[4625]: I1202 13:44:28.858992 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:28 crc kubenswrapper[4625]: E1202 13:44:28.859042 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.084743 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gnnxh" event={"ID":"98490ada-9405-4703-8fef-4211d5b99400","Type":"ContainerStarted","Data":"625e493406af59e65ee5524f11e6f0d1ee45676680f05b889c6a8124630a0411"} Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.086095 4625 generic.go:334] "Generic (PLEG): container finished" podID="c3810fa9-85cb-4c38-a835-57f56463ff66" containerID="4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6" exitCode=0 Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.086143 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" event={"ID":"c3810fa9-85cb-4c38-a835-57f56463ff66","Type":"ContainerDied","Data":"4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6"} Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.090711 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerStarted","Data":"c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35"} Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.090745 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerStarted","Data":"a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d"} Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.092349 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415"} Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.131345 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.198328 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.222032 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 
13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.240870 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.290657 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.342657 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.397938 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc 
kubenswrapper[4625]: I1202 13:44:29.415163 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.433933 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.457856 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.479933 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.503229 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.538336 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.571082 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 
13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.579567 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.579752 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:29 crc kubenswrapper[4625]: E1202 13:44:29.579818 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:33.579795804 +0000 UTC m=+29.541972879 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.579856 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:29 crc kubenswrapper[4625]: E1202 13:44:29.579890 4625 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:44:29 crc kubenswrapper[4625]: E1202 13:44:29.579936 4625 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:44:29 crc kubenswrapper[4625]: E1202 13:44:29.580022 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:44:29 crc kubenswrapper[4625]: E1202 13:44:29.580035 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:44:29 crc kubenswrapper[4625]: E1202 13:44:29.580046 4625 projected.go:194] Error preparing data 
for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:29 crc kubenswrapper[4625]: E1202 13:44:29.579940 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:33.579927788 +0000 UTC m=+29.542104863 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:44:29 crc kubenswrapper[4625]: E1202 13:44:29.580076 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:33.580069081 +0000 UTC m=+29.542246156 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:44:29 crc kubenswrapper[4625]: E1202 13:44:29.580088 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:33.580082762 +0000 UTC m=+29.542259837 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.579895 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.594639 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.612446 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.631664 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.659859 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc 
kubenswrapper[4625]: I1202 13:44:29.681423 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:29 crc kubenswrapper[4625]: E1202 13:44:29.681625 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:44:29 crc kubenswrapper[4625]: E1202 13:44:29.681655 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:44:29 crc kubenswrapper[4625]: E1202 13:44:29.681669 4625 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:29 crc kubenswrapper[4625]: E1202 13:44:29.681739 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:33.681718441 +0000 UTC m=+29.643895516 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.756870 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.803347 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.832263 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z 
is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.854168 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.855277 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:29 crc kubenswrapper[4625]: E1202 13:44:29.855481 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.898542 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.923786 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.939354 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:29 crc kubenswrapper[4625]: I1202 13:44:29.957400 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:29Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.106776 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gnnxh" event={"ID":"98490ada-9405-4703-8fef-4211d5b99400","Type":"ContainerStarted","Data":"4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11"} Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.111526 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerStarted","Data":"d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766"} Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.113828 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" event={"ID":"c3810fa9-85cb-4c38-a835-57f56463ff66","Type":"ContainerStarted","Data":"e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f"} Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.129769 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.145092 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.159915 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.172980 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.183605 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.196534 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.212000 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.226078 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 
13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.240284 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.256699 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.273063 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.333692 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc 
kubenswrapper[4625]: I1202 13:44:30.353516 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.380283 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.421930 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.454120 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.475207 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.546399 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.569501 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.575383 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.575858 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.581105 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.592910 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b15
4edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.609244 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.747023 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.764371 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.883737 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:30 crc kubenswrapper[4625]: E1202 13:44:30.883946 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.884369 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:30 crc kubenswrapper[4625]: E1202 13:44:30.884457 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.893126 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.953249 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\
\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"nam
e\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:30 crc kubenswrapper[4625]: I1202 13:44:30.979462 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:30Z is after 2025-08-24T17:21:41Z" Dec 
02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.019271 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.073398 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.095048 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.115199 4625 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.120001 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerStarted","Data":"6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb"} Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.120079 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerStarted","Data":"350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f"} Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.136412 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.159447 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.187628 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.211924 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.234818 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.276173 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.295597 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.312058 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.316752 4625 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.319456 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.319498 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.319511 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.319681 4625 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.328179 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.329634 4625 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.329975 4625 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.331282 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.331306 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.331335 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.331358 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.331370 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:31Z","lastTransitionTime":"2025-12-02T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.343919 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: E1202 13:44:31.355854 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.361208 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.361241 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
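
[Annotation] The webhook container's own volumeMounts (visible in the network-node-identity-vrzqb status above) show the serving certificate mounted at /etc/webhook-cert/. A companion sketch that checks the certificate on disk instead of over the wire; the tls.crt file name is an assumption based on the usual kubernetes.io/tls secret layout, not something the log confirms:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Mount path comes from the webhook container's volumeMounts in the
    	// log; the tls.crt file name is assumed (standard kubernetes.io/tls
    	// secret layout).
    	data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
    	if err != nil {
    		fmt.Println("read failed:", err)
    		return
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Println("no PEM block found")
    		return
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Println("parse failed:", err)
    		return
    	}
    	if time.Now().After(cert.NotAfter) {
    		fmt.Println("serving certificate expired at",
    			cert.NotAfter.UTC().Format(time.RFC3339))
    	}
    }
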
event="NodeHasNoDiskPressure" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.361252 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.361270 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.361281 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:31Z","lastTransitionTime":"2025-12-02T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:31 crc kubenswrapper[4625]: E1202 13:44:31.379003 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.392425 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.392474 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.392485 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.392506 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.392520 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:31Z","lastTransitionTime":"2025-12-02T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:31 crc kubenswrapper[4625]: E1202 13:44:31.407673 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.412900 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.412978 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
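
[Annotation] Separately from the webhook failures, the NodeNotReady condition repeated above carries its own cause in the message text: no CNI configuration file in /etc/kubernetes/cni/net.d/. A small sketch of the check an operator might run against that directory; the *.conf/*.conflist patterns are the conventional CNI config extensions, assumed here rather than taken from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Directory named verbatim in the kubelet's NetworkPluginNotReady message.
    	dir := "/etc/kubernetes/cni/net.d"
    	// Conventional CNI config extensions (assumption, not from the log).
    	var matches []string
    	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
    		m, _ := filepath.Glob(filepath.Join(dir, pat))
    		matches = append(matches, m...)
    	}
    	if len(matches) == 0 {
    		fmt.Println("no CNI configuration in", dir, "- kubelet will keep the node NotReady")
    		os.Exit(1)
    	}
    	for _, m := range matches {
    		fmt.Println("found:", m)
    	}
    }
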
event="NodeHasNoDiskPressure" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.412992 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.413019 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.413034 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:31Z","lastTransitionTime":"2025-12-02T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:31 crc kubenswrapper[4625]: E1202 13:44:31.439056 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.445171 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.445242 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.445256 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.445276 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.445288 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:31Z","lastTransitionTime":"2025-12-02T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:31 crc kubenswrapper[4625]: E1202 13:44:31.461802 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:31Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:31 crc kubenswrapper[4625]: E1202 13:44:31.462147 4625 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.465529 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.465611 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.465630 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.465685 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.465708 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:31Z","lastTransitionTime":"2025-12-02T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.569082 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.569125 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.569136 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.569159 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.569174 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:31Z","lastTransitionTime":"2025-12-02T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.672360 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.672433 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.672471 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.672494 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.672505 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:31Z","lastTransitionTime":"2025-12-02T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.980892 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:31 crc kubenswrapper[4625]: E1202 13:44:31.981000 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.981010 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:31 crc kubenswrapper[4625]: E1202 13:44:31.981091 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.994942 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.994989 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.995001 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.995020 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:31 crc kubenswrapper[4625]: I1202 13:44:31.995033 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:31Z","lastTransitionTime":"2025-12-02T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.097710 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.097755 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.097765 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.097781 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.097792 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:32Z","lastTransitionTime":"2025-12-02T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.125205 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerStarted","Data":"0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7"} Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.199462 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.199489 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.199497 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.199510 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.199519 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:32Z","lastTransitionTime":"2025-12-02T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.302934 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.303225 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.303239 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.303255 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.303267 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:32Z","lastTransitionTime":"2025-12-02T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.405359 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.405404 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.405419 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.405437 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.405449 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:32Z","lastTransitionTime":"2025-12-02T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.508015 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.508049 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.508061 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.508077 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.508089 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:32Z","lastTransitionTime":"2025-12-02T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.610400 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.610430 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.610439 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.610456 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.610466 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:32Z","lastTransitionTime":"2025-12-02T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.713394 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.713698 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.713815 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.713910 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.714014 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:32Z","lastTransitionTime":"2025-12-02T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.765828 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.783207 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.797083 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.811589 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.816113 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.816149 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.816158 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.816187 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.816198 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:32Z","lastTransitionTime":"2025-12-02T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.824901 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.12
6.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.841413 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:32Z 
is after 2025-08-24T17:21:41Z" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.854599 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.855668 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:32 crc kubenswrapper[4625]: E1202 13:44:32.855770 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.869197 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.881979 4625 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.896714 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.911057 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.918367 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.918399 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.918409 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.918422 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.918432 4625 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:32Z","lastTransitionTime":"2025-12-02T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.926519 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.937596 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.954207 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:32 crc kubenswrapper[4625]: I1202 13:44:32.966870 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.022630 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.022665 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.022676 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.022691 4625 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.022702 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:33Z","lastTransitionTime":"2025-12-02T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.125488 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.125536 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.125547 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.125567 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.125577 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:33Z","lastTransitionTime":"2025-12-02T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.138897 4625 generic.go:334] "Generic (PLEG): container finished" podID="c3810fa9-85cb-4c38-a835-57f56463ff66" containerID="e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f" exitCode=0 Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.138951 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" event={"ID":"c3810fa9-85cb-4c38-a835-57f56463ff66","Type":"ContainerDied","Data":"e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f"} Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.165813 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.180389 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.193904 4625 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd
791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.213873 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.226225 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.228892 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.228926 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.228939 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.228955 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.228968 4625 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:33Z","lastTransitionTime":"2025-12-02T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.238939 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.250838 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.268864 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.281462 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.294798 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.312776 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.328866 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.337655 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.337710 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.337726 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.337746 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.337759 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:33Z","lastTransitionTime":"2025-12-02T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.342837 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.356282 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.439926 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.439982 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.440004 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.440025 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.440042 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:33Z","lastTransitionTime":"2025-12-02T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.542732 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.542768 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.542778 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.542796 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.542806 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:33Z","lastTransitionTime":"2025-12-02T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.597203 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.597347 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.597395 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.597446 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:33 crc kubenswrapper[4625]: E1202 13:44:33.597482 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:41.597439159 +0000 UTC m=+37.559616244 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:33 crc kubenswrapper[4625]: E1202 13:44:33.597623 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:44:33 crc kubenswrapper[4625]: E1202 13:44:33.597651 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:44:33 crc kubenswrapper[4625]: E1202 13:44:33.597666 4625 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:33 crc kubenswrapper[4625]: E1202 13:44:33.597683 4625 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:44:33 crc kubenswrapper[4625]: E1202 13:44:33.597715 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:41.597702637 +0000 UTC m=+37.559879722 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:33 crc kubenswrapper[4625]: E1202 13:44:33.597622 4625 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:44:33 crc kubenswrapper[4625]: E1202 13:44:33.597755 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:41.597733808 +0000 UTC m=+37.559910953 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:44:33 crc kubenswrapper[4625]: E1202 13:44:33.597809 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:41.597791269 +0000 UTC m=+37.559968344 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.645653 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.645709 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.645725 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.645747 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.645764 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:33Z","lastTransitionTime":"2025-12-02T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.698685 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:44:33 crc kubenswrapper[4625]: E1202 13:44:33.698954 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 02 13:44:33 crc kubenswrapper[4625]: E1202 13:44:33.698991 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 02 13:44:33 crc kubenswrapper[4625]: E1202 13:44:33.699006 4625 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 02 13:44:33 crc kubenswrapper[4625]: E1202 13:44:33.699060 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:41.699043987 +0000 UTC m=+37.661221062 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.748417 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.748456 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.748468 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.748486 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.748499 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:33Z","lastTransitionTime":"2025-12-02T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.851039 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.851086 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.851099 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.851116 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.851128 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:33Z","lastTransitionTime":"2025-12-02T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.855613 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.855616 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:44:33 crc kubenswrapper[4625]: E1202 13:44:33.855718 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:44:33 crc kubenswrapper[4625]: E1202 13:44:33.855775 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.954057 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.954095 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.954106 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.954121 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:33 crc kubenswrapper[4625]: I1202 13:44:33.954141 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:33Z","lastTransitionTime":"2025-12-02T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.056923 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.056990 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.057002 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.057024 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.057038 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:34Z","lastTransitionTime":"2025-12-02T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.145142 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerStarted","Data":"30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87"}
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.148407 4625 generic.go:334] "Generic (PLEG): container finished" podID="c3810fa9-85cb-4c38-a835-57f56463ff66" containerID="3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f" exitCode=0
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.148439 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" event={"ID":"c3810fa9-85cb-4c38-a835-57f56463ff66","Type":"ContainerDied","Data":"3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f"}
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.159823 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.159860 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.159890 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.159909 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.159921 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:34Z","lastTransitionTime":"2025-12-02T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.172074 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.181980 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.191031 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.209419 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.223075 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.237201 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.249981 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z"
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.262702 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z"
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.263754 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.263782 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.263794 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.263811 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.263825 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:34Z","lastTransitionTime":"2025-12-02T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.274164 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"nam
e\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.284282 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.302008 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.315098 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.329116 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.341803 4625 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.366761 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.366841 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.366853 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.366875 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.366904 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:34Z","lastTransitionTime":"2025-12-02T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.468884 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.469078 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.469173 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.469262 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.469378 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:34Z","lastTransitionTime":"2025-12-02T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.572073 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.572132 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.572142 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.572158 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.572192 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:34Z","lastTransitionTime":"2025-12-02T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.675281 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.675371 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.675391 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.675414 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.675432 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:34Z","lastTransitionTime":"2025-12-02T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.777509 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.777571 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.777588 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.777608 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.777641 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:34Z","lastTransitionTime":"2025-12-02T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.855390 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:34 crc kubenswrapper[4625]: E1202 13:44:34.855541 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.869366 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.885102 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.887036 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.887111 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.887127 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.887144 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.887156 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:34Z","lastTransitionTime":"2025-12-02T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.900524 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.913410 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.925028 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.949108 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.961676 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.975538 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.989044 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.989298 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.989418 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.989518 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.989609 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:34Z","lastTransitionTime":"2025-12-02T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:34 crc kubenswrapper[4625]: I1202 13:44:34.991657 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.005225 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.021924 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.038551 4625 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.052266 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.061689 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 
2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.128830 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.129050 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.129158 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.129258 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.129363 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:35Z","lastTransitionTime":"2025-12-02T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.152891 4625 generic.go:334] "Generic (PLEG): container finished" podID="c3810fa9-85cb-4c38-a835-57f56463ff66" containerID="8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987" exitCode=0 Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.152933 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" event={"ID":"c3810fa9-85cb-4c38-a835-57f56463ff66","Type":"ContainerDied","Data":"8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987"} Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.178785 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.189071 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.203565 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac
117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.217373 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.229434 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.231030 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.231129 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.231141 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.231157 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.231168 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:35Z","lastTransitionTime":"2025-12-02T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.239751 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.248427 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.266597 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.279952 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.294159 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.309075 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.320097 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.334172 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.334220 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.334230 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.334246 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.334256 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:35Z","lastTransitionTime":"2025-12-02T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.335992 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.353806 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sh
a256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.438103 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.438157 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.438168 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.438193 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.438211 4625 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:35Z","lastTransitionTime":"2025-12-02T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.541354 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.541416 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.541433 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.541459 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.541472 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:35Z","lastTransitionTime":"2025-12-02T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.645104 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.645155 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.645171 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.645192 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.645214 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:35Z","lastTransitionTime":"2025-12-02T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.749129 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.749181 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.749202 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.749225 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.749242 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:35Z","lastTransitionTime":"2025-12-02T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.855104 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.855132 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:35 crc kubenswrapper[4625]: E1202 13:44:35.855597 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:35 crc kubenswrapper[4625]: E1202 13:44:35.855755 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.864672 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.864732 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.864741 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.864755 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.864764 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:35Z","lastTransitionTime":"2025-12-02T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.967656 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.967688 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.967697 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.967711 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:35 crc kubenswrapper[4625]: I1202 13:44:35.967721 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:35Z","lastTransitionTime":"2025-12-02T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.070532 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.070607 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.070623 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.070678 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.070693 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:36Z","lastTransitionTime":"2025-12-02T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.158924 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerStarted","Data":"13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6"} Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.159894 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.159946 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.179004 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.179040 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.179049 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.179063 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.179074 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:36Z","lastTransitionTime":"2025-12-02T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.180913 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" event={"ID":"c3810fa9-85cb-4c38-a835-57f56463ff66","Type":"ContainerStarted","Data":"052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7"} Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.184529 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\
\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\
\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 
13:44:36.198435 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.212225 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.215441 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.216200 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.226591 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.238669 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.253104 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.267199 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.277300 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.281013 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.281034 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.281041 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.281054 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.281063 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:36Z","lastTransitionTime":"2025-12-02T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.288979 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.302419 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.361844 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.382820 4625 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.382849 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.382859 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.382875 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.382886 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:36Z","lastTransitionTime":"2025-12-02T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.401039 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.423822 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.439346 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.462138 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.473881 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.490012 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.490045 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.490056 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.490072 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.490083 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:36Z","lastTransitionTime":"2025-12-02T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.497489 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.516650 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.531945 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.550401 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.566502 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.592881 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.592919 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.592930 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.592964 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.592975 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:36Z","lastTransitionTime":"2025-12-02T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.616384 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.630581 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.649909 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.665281 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.677101 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.688190 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.694610 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.694642 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.694654 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.694673 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.694685 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:36Z","lastTransitionTime":"2025-12-02T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.700702 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:36Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.796637 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.796665 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.796673 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.796697 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.796706 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:36Z","lastTransitionTime":"2025-12-02T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.857907 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:36 crc kubenswrapper[4625]: E1202 13:44:36.858036 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.898980 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.899015 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.899026 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.899060 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:36 crc kubenswrapper[4625]: I1202 13:44:36.899073 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:36Z","lastTransitionTime":"2025-12-02T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.001927 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.001987 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.001999 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.002016 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.002030 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:37Z","lastTransitionTime":"2025-12-02T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.104811 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.104866 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.104879 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.104894 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.104903 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:37Z","lastTransitionTime":"2025-12-02T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.183717 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.207500 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.207542 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.207564 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.207581 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.207593 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:37Z","lastTransitionTime":"2025-12-02T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.310689 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.310743 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.310761 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.310782 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.310796 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:37Z","lastTransitionTime":"2025-12-02T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.413898 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.414283 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.414296 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.414350 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.414367 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:37Z","lastTransitionTime":"2025-12-02T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.517326 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.517387 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.517407 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.517430 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.517443 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:37Z","lastTransitionTime":"2025-12-02T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.620642 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.620690 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.620701 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.620718 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.620731 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:37Z","lastTransitionTime":"2025-12-02T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.724162 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.724209 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.724224 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.724241 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.724255 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:37Z","lastTransitionTime":"2025-12-02T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.879010 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.879048 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.879125 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:37 crc kubenswrapper[4625]: E1202 13:44:37.879271 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:37 crc kubenswrapper[4625]: E1202 13:44:37.879436 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:37 crc kubenswrapper[4625]: E1202 13:44:37.879510 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.880722 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.880769 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.880778 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.880792 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.880812 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:37Z","lastTransitionTime":"2025-12-02T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.986580 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.986613 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.986625 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.986641 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:37 crc kubenswrapper[4625]: I1202 13:44:37.986651 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:37Z","lastTransitionTime":"2025-12-02T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.089578 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.089613 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.089622 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.089639 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.089648 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:38Z","lastTransitionTime":"2025-12-02T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.191465 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.191516 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.191525 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.191571 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.191584 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:38Z","lastTransitionTime":"2025-12-02T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.192664 4625 generic.go:334] "Generic (PLEG): container finished" podID="c3810fa9-85cb-4c38-a835-57f56463ff66" containerID="052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7" exitCode=0 Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.192770 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" event={"ID":"c3810fa9-85cb-4c38-a835-57f56463ff66","Type":"ContainerDied","Data":"052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7"} Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.192830 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.209528 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.227487 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.249380 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.267965 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.280133 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.292355 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.293863 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.293904 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.293915 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.293933 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.293948 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:38Z","lastTransitionTime":"2025-12-02T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.311528 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/r
un/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.323111 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.336707 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.349773 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.365518 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.382085 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753f
c478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.394906 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.396526 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.396542 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.396558 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.396572 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.396583 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:38Z","lastTransitionTime":"2025-12-02T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.407836 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:38Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.499716 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.499776 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.499791 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.499815 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.499830 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:38Z","lastTransitionTime":"2025-12-02T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.602853 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.603138 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.603305 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.603501 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.603636 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:38Z","lastTransitionTime":"2025-12-02T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.706011 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.706048 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.706057 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.706077 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.706087 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:38Z","lastTransitionTime":"2025-12-02T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.808914 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.808968 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.808980 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.809000 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.809012 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:38Z","lastTransitionTime":"2025-12-02T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.912793 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.912867 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.912908 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.912933 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:38 crc kubenswrapper[4625]: I1202 13:44:38.912946 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:38Z","lastTransitionTime":"2025-12-02T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.016245 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.016334 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.016349 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.016375 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.016390 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:39Z","lastTransitionTime":"2025-12-02T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.120034 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.120092 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.120105 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.120129 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.120141 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:39Z","lastTransitionTime":"2025-12-02T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.200796 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" event={"ID":"c3810fa9-85cb-4c38-a835-57f56463ff66","Type":"ContainerStarted","Data":"be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7"} Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.217623 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.222624 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.222690 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.222706 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.222730 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.222744 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:39Z","lastTransitionTime":"2025-12-02T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.233012 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.250728 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.272798 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.290716 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.305934 4625 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.322105 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountP
ath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.325498 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.325563 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.325574 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.325597 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.325611 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:39Z","lastTransitionTime":"2025-12-02T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.337282 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.
168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.357996 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.373670 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.388866 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.393867 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895"] Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.394376 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.399369 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.399618 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.408937 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.425546 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.427967 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.428010 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.428025 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.428045 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.428056 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:39Z","lastTransitionTime":"2025-12-02T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.441258 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc 
kubenswrapper[4625]: I1202 13:44:39.457080 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.470691 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.484993 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.496533 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fb167ef-23b4-4c65-bd65-a0219101b109-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cw895\" (UID: \"2fb167ef-23b4-4c65-bd65-a0219101b109\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.496583 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fb167ef-23b4-4c65-bd65-a0219101b109-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cw895\" (UID: \"2fb167ef-23b4-4c65-bd65-a0219101b109\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.496650 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fb167ef-23b4-4c65-bd65-a0219101b109-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cw895\" (UID: \"2fb167ef-23b4-4c65-bd65-a0219101b109\") 
" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.496676 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5p2t\" (UniqueName: \"kubernetes.io/projected/2fb167ef-23b4-4c65-bd65-a0219101b109-kube-api-access-x5p2t\") pod \"ovnkube-control-plane-749d76644c-cw895\" (UID: \"2fb167ef-23b4-4c65-bd65-a0219101b109\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.500773 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/c
rcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.513417 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.526279 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.530005 4625 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.530059 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.530069 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.530090 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.530101 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:39Z","lastTransitionTime":"2025-12-02T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.539677 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.554482 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.564976 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.587463 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.597799 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fb167ef-23b4-4c65-bd65-a0219101b109-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cw895\" (UID: \"2fb167ef-23b4-4c65-bd65-a0219101b109\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.598099 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5p2t\" (UniqueName: \"kubernetes.io/projected/2fb167ef-23b4-4c65-bd65-a0219101b109-kube-api-access-x5p2t\") pod \"ovnkube-control-plane-749d76644c-cw895\" (UID: \"2fb167ef-23b4-4c65-bd65-a0219101b109\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.598264 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fb167ef-23b4-4c65-bd65-a0219101b109-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cw895\" (UID: \"2fb167ef-23b4-4c65-bd65-a0219101b109\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.598994 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fb167ef-23b4-4c65-bd65-a0219101b109-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cw895\" (UID: \"2fb167ef-23b4-4c65-bd65-a0219101b109\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.598937 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fb167ef-23b4-4c65-bd65-a0219101b109-env-overrides\") pod \"ovnkube-control-plane-749d76644c-cw895\" (UID: \"2fb167ef-23b4-4c65-bd65-a0219101b109\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.598882 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fb167ef-23b4-4c65-bd65-a0219101b109-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-cw895\" (UID: \"2fb167ef-23b4-4c65-bd65-a0219101b109\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.601586 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.606602 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fb167ef-23b4-4c65-bd65-a0219101b109-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-cw895\" (UID: \"2fb167ef-23b4-4c65-bd65-a0219101b109\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.617461 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5p2t\" (UniqueName: \"kubernetes.io/projected/2fb167ef-23b4-4c65-bd65-a0219101b109-kube-api-access-x5p2t\") pod \"ovnkube-control-plane-749d76644c-cw895\" (UID: \"2fb167ef-23b4-4c65-bd65-a0219101b109\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.620074 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.633144 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.633180 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.633191 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.633208 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.633218 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:39Z","lastTransitionTime":"2025-12-02T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.636230 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.646451 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.656487 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:39Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.708026 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" Dec 02 13:44:39 crc kubenswrapper[4625]: W1202 13:44:39.722569 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fb167ef_23b4_4c65_bd65_a0219101b109.slice/crio-4455442ee247c774aca0a6aca9de2df094e7484e822d0ff427f9721b87f3e0a0 WatchSource:0}: Error finding container 4455442ee247c774aca0a6aca9de2df094e7484e822d0ff427f9721b87f3e0a0: Status 404 returned error can't find the container with id 4455442ee247c774aca0a6aca9de2df094e7484e822d0ff427f9721b87f3e0a0 Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.735807 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.735909 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.735929 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.735950 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.735965 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:39Z","lastTransitionTime":"2025-12-02T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.838818 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.838871 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.838882 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.838899 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.838911 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:39Z","lastTransitionTime":"2025-12-02T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.861163 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:39 crc kubenswrapper[4625]: E1202 13:44:39.862036 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.862142 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:39 crc kubenswrapper[4625]: E1202 13:44:39.862195 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.862239 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:39 crc kubenswrapper[4625]: E1202 13:44:39.862281 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.941969 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.942046 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.942056 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.942079 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:39 crc kubenswrapper[4625]: I1202 13:44:39.942091 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:39Z","lastTransitionTime":"2025-12-02T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.080057 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.080098 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.080107 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.080129 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.080141 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:40Z","lastTransitionTime":"2025-12-02T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.184199 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.184261 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.184273 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.184293 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.184304 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:40Z","lastTransitionTime":"2025-12-02T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.236392 4625 generic.go:334] "Generic (PLEG): container finished" podID="c3810fa9-85cb-4c38-a835-57f56463ff66" containerID="be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7" exitCode=0 Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.236458 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" event={"ID":"c3810fa9-85cb-4c38-a835-57f56463ff66","Type":"ContainerDied","Data":"be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7"} Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.239611 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" event={"ID":"2fb167ef-23b4-4c65-bd65-a0219101b109","Type":"ContainerStarted","Data":"ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7"} Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.239637 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" event={"ID":"2fb167ef-23b4-4c65-bd65-a0219101b109","Type":"ContainerStarted","Data":"4455442ee247c774aca0a6aca9de2df094e7484e822d0ff427f9721b87f3e0a0"} Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.341357 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13bdc18d1fccab1fb63d3e337b36c2572933f4aa
86622d509c79a2ed4990deb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:40Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.345543 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.345578 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.345592 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.345611 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.345624 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:40Z","lastTransitionTime":"2025-12-02T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.445091 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:40Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.485762 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:40Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.495391 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.495442 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.495453 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.495475 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.495487 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:40Z","lastTransitionTime":"2025-12-02T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.509933 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:40Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.618197 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:40Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.628756 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:40Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.647600 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:40Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.663530 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:40Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.668701 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.668737 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.668747 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.668767 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.668779 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:40Z","lastTransitionTime":"2025-12-02T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.683549 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-02T13:44:40Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.701640 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:40Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.717414 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:40Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.733072 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:40Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.756825 4625 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:40Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.773182 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.773229 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.773244 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.773270 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.773291 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:40Z","lastTransitionTime":"2025-12-02T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.782020 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:40Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.797278 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:40Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.876190 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.876264 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.876280 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.876341 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:40 crc kubenswrapper[4625]: I1202 13:44:40.876372 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:40Z","lastTransitionTime":"2025-12-02T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:40.979128 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:40.979168 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:40.979189 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:40.979209 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:40.979219 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:40Z","lastTransitionTime":"2025-12-02T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.100854 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.100916 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.100928 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.100949 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.100964 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:41Z","lastTransitionTime":"2025-12-02T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.204037 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.204085 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.204096 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.204121 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.204134 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:41Z","lastTransitionTime":"2025-12-02T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.284065 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" event={"ID":"2fb167ef-23b4-4c65-bd65-a0219101b109","Type":"ContainerStarted","Data":"9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57"} Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.305406 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.307275 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.307348 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.307365 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.307383 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.307394 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:41Z","lastTransitionTime":"2025-12-02T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.315885 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.329713 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.343058 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.362700 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.375745 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.389721 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.403502 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.410747 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.410805 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.410817 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.410843 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.410856 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:41Z","lastTransitionTime":"2025-12-02T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.418253 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.431462 4625 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.444255 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 
13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.460021 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.470078 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-x94k8"] Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.470861 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.470991 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.477500 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.488331 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.500080 4625 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.512711 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.513530 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.513585 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.513609 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.513631 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.513647 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:41Z","lastTransitionTime":"2025-12-02T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.528689 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.543158 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.555328 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.565392 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.578346 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8jh6\" (UniqueName: \"kubernetes.io/projected/23fa40dc-ba01-4997-bb3f-c9774637dc22-kube-api-access-v8jh6\") pod \"network-metrics-daemon-x94k8\" (UID: \"23fa40dc-ba01-4997-bb3f-c9774637dc22\") " pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.578439 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs\") pod 
\"network-metrics-daemon-x94k8\" (UID: \"23fa40dc-ba01-4997-bb3f-c9774637dc22\") " pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.593937 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-o
penvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.619366 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.619438 4625 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.619469 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.619496 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.619513 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:41Z","lastTransitionTime":"2025-12-02T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.623897 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"nam
e\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.648926 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.675611 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.679285 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.679347 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.679364 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.679370 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.679387 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.679402 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:41Z","lastTransitionTime":"2025-12-02T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.679518 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs\") pod \"network-metrics-daemon-x94k8\" (UID: \"23fa40dc-ba01-4997-bb3f-c9774637dc22\") " pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.679541 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.679569 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.679662 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:44:57.679633826 +0000 UTC m=+53.641810911 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.679717 4625 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.679739 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.679756 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.679769 4625 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.679794 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-02 13:44:57.67977962 +0000 UTC m=+53.641956695 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.679814 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:57.679804751 +0000 UTC m=+53.641981826 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.679590 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8jh6\" (UniqueName: \"kubernetes.io/projected/23fa40dc-ba01-4997-bb3f-c9774637dc22-kube-api-access-v8jh6\") pod \"network-metrics-daemon-x94k8\" (UID: \"23fa40dc-ba01-4997-bb3f-c9774637dc22\") " pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.679833 4625 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.679871 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.679948 4625 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.679948 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs podName:23fa40dc-ba01-4997-bb3f-c9774637dc22 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:42.179919804 +0000 UTC m=+38.142096879 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs") pod "network-metrics-daemon-x94k8" (UID: "23fa40dc-ba01-4997-bb3f-c9774637dc22") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.679989 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-02 13:44:57.679982945 +0000 UTC m=+53.642160020 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.707741 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987
117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.708710 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.715400 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8jh6\" (UniqueName: \"kubernetes.io/projected/23fa40dc-ba01-4997-bb3f-c9774637dc22-kube-api-access-v8jh6\") pod \"network-metrics-daemon-x94k8\" (UID: \"23fa40dc-ba01-4997-bb3f-c9774637dc22\") " pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.719429 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.719492 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.719504 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.719526 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.719542 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:41Z","lastTransitionTime":"2025-12-02T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.734730 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.740052 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.750482 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacc
ount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.751284 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.751469 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.751555 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.751624 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.751689 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:41Z","lastTransitionTime":"2025-12-02T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.809949 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.810252 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.810561 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.810635 4625 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.810762 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-02 13:44:57.810742942 +0000 UTC m=+53.772920017 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.839890 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.865995 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.866038 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.866050 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.866073 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.866085 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:41Z","lastTransitionTime":"2025-12-02T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.871420 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.876619 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.876795 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.877262 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.877336 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.877397 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.877448 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.880902 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.886478 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.886517 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.886527 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.886544 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.886555 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:41Z","lastTransitionTime":"2025-12-02T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.890689 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.901111 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: E1202 13:44:41.901241 4625 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.903021 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.903044 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.903055 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.903077 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.903089 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:41Z","lastTransitionTime":"2025-12-02T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.903834 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\
"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:41 crc kubenswrapper[4625]: I1202 13:44:41.916069 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:41Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.005377 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.005410 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.005419 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.005434 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.005444 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:42Z","lastTransitionTime":"2025-12-02T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.108200 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.108246 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.108256 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.108274 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.108285 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:42Z","lastTransitionTime":"2025-12-02T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.212273 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.212342 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.212353 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.212373 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.212387 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:42Z","lastTransitionTime":"2025-12-02T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.237605 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs\") pod \"network-metrics-daemon-x94k8\" (UID: \"23fa40dc-ba01-4997-bb3f-c9774637dc22\") " pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:42 crc kubenswrapper[4625]: E1202 13:44:42.237868 4625 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:44:42 crc kubenswrapper[4625]: E1202 13:44:42.238006 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs podName:23fa40dc-ba01-4997-bb3f-c9774637dc22 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:43.237973971 +0000 UTC m=+39.200151226 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs") pod "network-metrics-daemon-x94k8" (UID: "23fa40dc-ba01-4997-bb3f-c9774637dc22") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.406127 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.406177 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.406190 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.406214 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.406229 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:42Z","lastTransitionTime":"2025-12-02T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.463866 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" event={"ID":"c3810fa9-85cb-4c38-a835-57f56463ff66","Type":"ContainerStarted","Data":"959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47"} Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.666593 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.666654 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.666666 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.666689 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.666702 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:42Z","lastTransitionTime":"2025-12-02T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.681598 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:42Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.698839 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:42Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.712611 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:42Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.732183 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:42Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.755477 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:42Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.777174 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:42Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.793669 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:42Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.814294 4625 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:42Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.831414 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:42Z is after 2025-08-24T17:21:41Z" Dec 02 
13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.849847 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:42Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.862483 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.862580 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.862598 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.862623 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.862643 4625 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:42Z","lastTransitionTime":"2025-12-02T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.986437 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.986470 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.986833 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.986867 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:42 crc kubenswrapper[4625]: I1202 13:44:42.986878 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:42Z","lastTransitionTime":"2025-12-02T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.003766 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:42Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.039288 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.185322 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.185399 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.185414 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.185437 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.185476 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:43Z","lastTransitionTime":"2025-12-02T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.187043 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.226488 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.285274 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs\") pod \"network-metrics-daemon-x94k8\" (UID: \"23fa40dc-ba01-4997-bb3f-c9774637dc22\") " pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:43 crc kubenswrapper[4625]: E1202 13:44:43.285568 4625 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:44:43 crc kubenswrapper[4625]: E1202 13:44:43.285653 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs podName:23fa40dc-ba01-4997-bb3f-c9774637dc22 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:45.285630823 +0000 UTC m=+41.247807888 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs") pod "network-metrics-daemon-x94k8" (UID: "23fa40dc-ba01-4997-bb3f-c9774637dc22") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.290976 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.291019 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.291031 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.291048 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.291058 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:43Z","lastTransitionTime":"2025-12-02T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.327747 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.342940 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:43Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.394498 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.394674 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.394703 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.394739 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.394753 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:43Z","lastTransitionTime":"2025-12-02T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.497775 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.497822 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.497834 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.497857 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.497870 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:43Z","lastTransitionTime":"2025-12-02T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.601392 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.601949 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.601962 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.601981 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.601993 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:43Z","lastTransitionTime":"2025-12-02T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.704900 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.704985 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.704998 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.705025 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.705040 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:43Z","lastTransitionTime":"2025-12-02T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.808352 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.808422 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.808445 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.808470 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.808485 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:43Z","lastTransitionTime":"2025-12-02T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.855777 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.855831 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.855873 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.855777 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:43 crc kubenswrapper[4625]: E1202 13:44:43.855961 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:43 crc kubenswrapper[4625]: E1202 13:44:43.856658 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:43 crc kubenswrapper[4625]: E1202 13:44:43.856776 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:43 crc kubenswrapper[4625]: E1202 13:44:43.856852 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.890687 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.892209 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 13:44:43 crc kubenswrapper[4625]: E1202 13:44:43.892271 4625 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6 is running failed: container process not found" containerID="13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Dec 02 13:44:43 crc kubenswrapper[4625]: E1202 13:44:43.893234 4625 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6 is running failed: container process not found" containerID="13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Dec 02 13:44:43 crc kubenswrapper[4625]: E1202 13:44:43.893651 4625 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6 is running failed: container process not found" containerID="13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Dec 02 13:44:43 crc kubenswrapper[4625]: E1202 13:44:43.893713 4625 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:44:43 crc kubenswrapper[4625]: E1202 13:44:43.894168 4625 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6 is running failed: container process not found" containerID="13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Dec 02 13:44:43 crc kubenswrapper[4625]: E1202 13:44:43.894659 4625 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6 is running failed: container process not found" containerID="13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Dec 02 13:44:43 crc kubenswrapper[4625]: E1202 13:44:43.894939 4625 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6 is running failed: container process not found" containerID="13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Dec 02 13:44:43 crc kubenswrapper[4625]: E1202 13:44:43.894998 4625 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.911137 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.911205 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.911221 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.911242 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:43 crc kubenswrapper[4625]: I1202 13:44:43.911259 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:43Z","lastTransitionTime":"2025-12-02T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.014517 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.014605 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.014622 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.014644 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.014657 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:44Z","lastTransitionTime":"2025-12-02T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.116990 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.117039 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.117052 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.117072 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.117085 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:44Z","lastTransitionTime":"2025-12-02T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.220901 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.220972 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.220984 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.221005 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.221016 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:44Z","lastTransitionTime":"2025-12-02T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.324156 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.324201 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.324210 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.324228 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.324242 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:44Z","lastTransitionTime":"2025-12-02T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.427925 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.427970 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.427979 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.427995 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.428007 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:44Z","lastTransitionTime":"2025-12-02T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.473298 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovnkube-controller/0.log" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.476178 4625 generic.go:334] "Generic (PLEG): container finished" podID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerID="13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6" exitCode=1 Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.476244 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerDied","Data":"13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6"} Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.477164 4625 scope.go:117] "RemoveContainer" containerID="13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.494087 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.513556 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.530103 4625 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.530511 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.530592 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.530660 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.530714 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:44Z","lastTransitionTime":"2025-12-02T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.535712 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4079209
6b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12
-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.560895 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.578271 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.597504 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.612916 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.634813 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.635348 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.635462 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.635614 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.635705 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:44Z","lastTransitionTime":"2025-12-02T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.638106 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:44:43Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:44:43.508475 5810 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 13:44:43.508521 5810 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:44:43.508575 5810 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:44:43.508594 5810 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:44:43.508604 5810 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:44:43.508620 5810 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:44:43.508625 5810 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:44:43.508640 5810 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:44:43.508649 5810 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:44:43.508659 5810 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:44:43.508663 5810 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 13:44:43.508674 5810 factory.go:656] Stopping watch factory\\\\nI1202 13:44:43.508676 5810 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:44:43.508691 5810 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 13:44:43.508694 5810 ovnkube.go:599] Stopped 
ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.660234 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3
4720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.678381 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 
13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.696877 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.713975 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.728149 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.738810 4625 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.738876 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.738887 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.738907 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.739288 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:44Z","lastTransitionTime":"2025-12-02T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.745216 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.764104 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd1917
54b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.780996 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.842422 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.842471 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.842484 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.842502 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.842515 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:44Z","lastTransitionTime":"2025-12-02T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.880616 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.904668 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sh
a256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.927259 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.945773 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.945822 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.945834 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.945866 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.945881 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:44Z","lastTransitionTime":"2025-12-02T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.948819 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.966632 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:44 crc kubenswrapper[4625]: I1202 13:44:44.988829 4625 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed616
3a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\
":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.003172 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:44Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.018202 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.038788 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.048199 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.048232 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.048242 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.048261 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.048271 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:45Z","lastTransitionTime":"2025-12-02T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.053093 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.067206 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.093793 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:44:43Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:44:43.508475 5810 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 13:44:43.508521 5810 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:44:43.508575 5810 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:44:43.508594 5810 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:44:43.508604 5810 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:44:43.508620 5810 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:44:43.508625 5810 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:44:43.508640 5810 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:44:43.508649 5810 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:44:43.508659 5810 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:44:43.508663 5810 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 13:44:43.508674 5810 factory.go:656] Stopping watch factory\\\\nI1202 13:44:43.508676 5810 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:44:43.508691 5810 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 13:44:43.508694 5810 ovnkube.go:599] Stopped 
ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.112851 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3
4720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.127153 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.143582 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.151357 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.151409 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.151423 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.151444 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.151459 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:45Z","lastTransitionTime":"2025-12-02T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.159407 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.254073 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.254117 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.254128 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.254147 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.254159 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:45Z","lastTransitionTime":"2025-12-02T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.304924 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs\") pod \"network-metrics-daemon-x94k8\" (UID: \"23fa40dc-ba01-4997-bb3f-c9774637dc22\") " pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:45 crc kubenswrapper[4625]: E1202 13:44:45.305118 4625 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:44:45 crc kubenswrapper[4625]: E1202 13:44:45.305214 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs podName:23fa40dc-ba01-4997-bb3f-c9774637dc22 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:49.305186592 +0000 UTC m=+45.267363667 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs") pod "network-metrics-daemon-x94k8" (UID: "23fa40dc-ba01-4997-bb3f-c9774637dc22") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.357627 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.357678 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.357691 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.357708 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.357720 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:45Z","lastTransitionTime":"2025-12-02T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.460389 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.460433 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.460446 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.460469 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.460484 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:45Z","lastTransitionTime":"2025-12-02T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.482233 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovnkube-controller/0.log" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.484591 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerStarted","Data":"58e1b425dbabadc9856488a9bb7084d2b7d2747edf907ba21d756c9cd96d8733"} Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.485057 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.506893 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e1b425dbabadc9856488a9bb7084d2b7d2747e
df907ba21d756c9cd96d8733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:44:43Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:44:43.508475 5810 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 13:44:43.508521 5810 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:44:43.508575 5810 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:44:43.508594 5810 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:44:43.508604 5810 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:44:43.508620 5810 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:44:43.508625 5810 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:44:43.508640 5810 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:44:43.508649 5810 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:44:43.508659 5810 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:44:43.508663 5810 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 13:44:43.508674 5810 factory.go:656] Stopping watch factory\\\\nI1202 13:44:43.508676 5810 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:44:43.508691 5810 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 13:44:43.508694 5810 ovnkube.go:599] Stopped 
ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.522751 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.539363 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.556795 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.563586 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.563648 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.563663 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.563691 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.563706 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:45Z","lastTransitionTime":"2025-12-02T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.573870 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.589817 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.605189 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.621347 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.643189 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 
13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.662854 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.667017 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.667059 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.667072 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.667094 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.667110 4625 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:45Z","lastTransitionTime":"2025-12-02T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.679974 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.695961 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.712087 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.733929 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.747983 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.761500 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:45Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.770918 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.770954 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.770964 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.770997 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.771009 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:45Z","lastTransitionTime":"2025-12-02T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.855886 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.855929 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.855886 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.855929 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:45 crc kubenswrapper[4625]: E1202 13:44:45.856099 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:45 crc kubenswrapper[4625]: E1202 13:44:45.856218 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:45 crc kubenswrapper[4625]: E1202 13:44:45.856350 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:44:45 crc kubenswrapper[4625]: E1202 13:44:45.856470 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.873463 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.873510 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.873520 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.873555 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.873568 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:45Z","lastTransitionTime":"2025-12-02T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.976562 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.976691 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.976717 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.976738 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:45 crc kubenswrapper[4625]: I1202 13:44:45.976751 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:45Z","lastTransitionTime":"2025-12-02T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.080439 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.080513 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.080523 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.080542 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.080554 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:46Z","lastTransitionTime":"2025-12-02T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.183986 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.184047 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.184060 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.184084 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.184134 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:46Z","lastTransitionTime":"2025-12-02T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.287344 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.287376 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.287385 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.287404 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.287415 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:46Z","lastTransitionTime":"2025-12-02T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.390116 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.390185 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.390209 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.390232 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.390246 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:46Z","lastTransitionTime":"2025-12-02T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.492369 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.492424 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.492436 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.492455 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.492467 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:46Z","lastTransitionTime":"2025-12-02T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.492891 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovnkube-controller/1.log" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.493739 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovnkube-controller/0.log" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.496345 4625 generic.go:334] "Generic (PLEG): container finished" podID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerID="58e1b425dbabadc9856488a9bb7084d2b7d2747edf907ba21d756c9cd96d8733" exitCode=1 Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.496472 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerDied","Data":"58e1b425dbabadc9856488a9bb7084d2b7d2747edf907ba21d756c9cd96d8733"} Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.496611 4625 scope.go:117] "RemoveContainer" containerID="13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.497578 4625 scope.go:117] "RemoveContainer" containerID="58e1b425dbabadc9856488a9bb7084d2b7d2747edf907ba21d756c9cd96d8733" Dec 02 13:44:46 crc kubenswrapper[4625]: E1202 13:44:46.498130 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.518409 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:46Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.537351 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:46Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.555089 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:46Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.569751 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:46Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.583423 4625 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:46Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.596832 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.596884 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 
13:44:46.596899 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.596921 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.596934 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:46Z","lastTransitionTime":"2025-12-02T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.603945 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:46Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.620294 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:46Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.635240 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:46Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.649170 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:46Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.666897 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:46Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.681690 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:46Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.700590 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.700636 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.700648 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.700669 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.700681 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:46Z","lastTransitionTime":"2025-12-02T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.703209 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e1b425dbabadc9856488a9bb7084d2b7d2747edf907ba21d756c9cd96d8733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13bdc18d1fccab1fb63d3e337b36c2572933f4aa86622d509c79a2ed4990deb6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:44:43Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:44:43.508475 5810 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 13:44:43.508521 5810 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:44:43.508575 5810 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:44:43.508594 5810 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:44:43.508604 5810 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:44:43.508620 5810 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1202 13:44:43.508625 5810 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1202 13:44:43.508640 5810 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:44:43.508649 5810 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:44:43.508659 5810 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:44:43.508663 5810 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1202 13:44:43.508674 5810 factory.go:656] Stopping watch factory\\\\nI1202 13:44:43.508676 5810 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:44:43.508691 5810 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 13:44:43.508694 5810 ovnkube.go:599] Stopped 
ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e1b425dbabadc9856488a9bb7084d2b7d2747edf907ba21d756c9cd96d8733\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:44:45Z\\\",\\\"message\\\":\\\"60\\\\nI1202 13:44:45.335343 6039 services_controller.go:214] Setting up event handlers for endpoint slices for network=default\\\\nI1202 13:44:45.335367 6039 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:44:45.335405 6039 factory.go:656] Stopping watch factory\\\\nI1202 13:44:45.335417 6039 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:44:45.335587 6039 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:44:45.335914 6039 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:44:45.336042 6039 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:44:45.336548 6039 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:44:45.337158 6039 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:44:45.337193 6039 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 13:44:45.337284 6039 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:46Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.719079 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:46Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.743335 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:46Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.758522 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:46Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.772269 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:46Z is after 2025-08-24T17:21:41Z" Dec 02 
13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.803803 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.803867 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.803882 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.803905 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.803918 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:46Z","lastTransitionTime":"2025-12-02T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.907759 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.907819 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.907837 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.907860 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:46 crc kubenswrapper[4625]: I1202 13:44:46.907875 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:46Z","lastTransitionTime":"2025-12-02T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.010689 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.010779 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.010809 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.010828 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.010838 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:47Z","lastTransitionTime":"2025-12-02T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.114015 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.114052 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.114061 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.114082 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.114094 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:47Z","lastTransitionTime":"2025-12-02T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.218081 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.218126 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.218137 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.218159 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.218171 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:47Z","lastTransitionTime":"2025-12-02T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.321548 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.321588 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.321599 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.321621 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.321632 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:47Z","lastTransitionTime":"2025-12-02T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.424390 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.424444 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.424457 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.424480 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.424493 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:47Z","lastTransitionTime":"2025-12-02T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.504431 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovnkube-controller/1.log" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.508423 4625 scope.go:117] "RemoveContainer" containerID="58e1b425dbabadc9856488a9bb7084d2b7d2747edf907ba21d756c9cd96d8733" Dec 02 13:44:47 crc kubenswrapper[4625]: E1202 13:44:47.508616 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.528110 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.528154 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.528165 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.528183 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.528193 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:47Z","lastTransitionTime":"2025-12-02T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.532063 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:47Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.550436 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:47Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.567890 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:47Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.583249 4625 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:47Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.603085 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147
c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:47Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.617685 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:47Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.631054 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.631120 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.631134 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.631158 4625 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.631172 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:47Z","lastTransitionTime":"2025-12-02T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.632174 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:47Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.647061 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:47Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.662881 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:47Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.676980 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:47Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.700590 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e1b425dbabadc9856488a9bb7084d2b7d2747edf907ba21d756c9cd96d8733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e1b425dbabadc9856488a9bb7084d2b7d2747edf907ba21d756c9cd96d8733\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:44:45Z\\\",\\\"message\\\":\\\"60\\\\nI1202 13:44:45.335343 6039 services_controller.go:214] Setting up event handlers for endpoint slices for network=default\\\\nI1202 13:44:45.335367 6039 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:44:45.335405 6039 factory.go:656] Stopping watch factory\\\\nI1202 13:44:45.335417 6039 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:44:45.335587 6039 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:44:45.335914 6039 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:44:45.336042 6039 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:44:45.336548 6039 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:44:45.337158 6039 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:44:45.337193 6039 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 13:44:45.337284 6039 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:47Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.716532 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:47Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.730764 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:47Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.733654 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.733700 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.733714 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.733735 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.733787 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:47Z","lastTransitionTime":"2025-12-02T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.746505 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:47Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.759850 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:47Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.772518 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:47Z is after 2025-08-24T17:21:41Z" Dec 02 
13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.836884 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.836944 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.836954 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.836976 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.836989 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:47Z","lastTransitionTime":"2025-12-02T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.855433 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.855481 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.855530 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.855577 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:47 crc kubenswrapper[4625]: E1202 13:44:47.855745 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:47 crc kubenswrapper[4625]: E1202 13:44:47.855869 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:47 crc kubenswrapper[4625]: E1202 13:44:47.856019 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:47 crc kubenswrapper[4625]: E1202 13:44:47.856141 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.941566 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.941637 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.941664 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.941685 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:47 crc kubenswrapper[4625]: I1202 13:44:47.941697 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:47Z","lastTransitionTime":"2025-12-02T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.045102 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.045142 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.045152 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.045169 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.045179 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:48Z","lastTransitionTime":"2025-12-02T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.148208 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.148261 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.148270 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.148287 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.148298 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:48Z","lastTransitionTime":"2025-12-02T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.251562 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.251629 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.251645 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.251669 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.251684 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:48Z","lastTransitionTime":"2025-12-02T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.354118 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.354154 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.354178 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.354196 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.354205 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:48Z","lastTransitionTime":"2025-12-02T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.457024 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.457074 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.457094 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.457117 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.457131 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:48Z","lastTransitionTime":"2025-12-02T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.559944 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.559994 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.560004 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.560023 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.560034 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:48Z","lastTransitionTime":"2025-12-02T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.663283 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.663397 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.663412 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.663434 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.663449 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:48Z","lastTransitionTime":"2025-12-02T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.766610 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.766692 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.766703 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.766721 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.766731 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:48Z","lastTransitionTime":"2025-12-02T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.869750 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.869826 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.869850 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.869892 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.869908 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:48Z","lastTransitionTime":"2025-12-02T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.973670 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.973716 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.973728 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.973755 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:48 crc kubenswrapper[4625]: I1202 13:44:48.973774 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:48Z","lastTransitionTime":"2025-12-02T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.076387 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.076485 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.076497 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.076525 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.076541 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:49Z","lastTransitionTime":"2025-12-02T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.180153 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.180207 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.180233 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.180256 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.180273 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:49Z","lastTransitionTime":"2025-12-02T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.283084 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.283137 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.283149 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.283171 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.283183 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:49Z","lastTransitionTime":"2025-12-02T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.347701 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs\") pod \"network-metrics-daemon-x94k8\" (UID: \"23fa40dc-ba01-4997-bb3f-c9774637dc22\") " pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:49 crc kubenswrapper[4625]: E1202 13:44:49.347968 4625 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:44:49 crc kubenswrapper[4625]: E1202 13:44:49.348119 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs podName:23fa40dc-ba01-4997-bb3f-c9774637dc22 nodeName:}" failed. No retries permitted until 2025-12-02 13:44:57.348085492 +0000 UTC m=+53.310262737 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs") pod "network-metrics-daemon-x94k8" (UID: "23fa40dc-ba01-4997-bb3f-c9774637dc22") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.386467 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.386505 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.386515 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.386532 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.386544 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:49Z","lastTransitionTime":"2025-12-02T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.490721 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.490778 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.490791 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.490816 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.490832 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:49Z","lastTransitionTime":"2025-12-02T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.593799 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.593858 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.593905 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.593927 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.593940 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:49Z","lastTransitionTime":"2025-12-02T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.696710 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.696761 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.696776 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.696801 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.696823 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:49Z","lastTransitionTime":"2025-12-02T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.800062 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.800111 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.800123 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.800141 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.800154 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:49Z","lastTransitionTime":"2025-12-02T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.855102 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.855108 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.855150 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:49 crc kubenswrapper[4625]: E1202 13:44:49.855752 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:49 crc kubenswrapper[4625]: E1202 13:44:49.855762 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.855165 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:49 crc kubenswrapper[4625]: E1202 13:44:49.855832 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:49 crc kubenswrapper[4625]: E1202 13:44:49.855951 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.903444 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.903499 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.903511 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.903532 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:49 crc kubenswrapper[4625]: I1202 13:44:49.903546 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:49Z","lastTransitionTime":"2025-12-02T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.006335 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.006380 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.006392 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.006415 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.006434 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:50Z","lastTransitionTime":"2025-12-02T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.111479 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.111949 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.112167 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.112268 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.112402 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:50Z","lastTransitionTime":"2025-12-02T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.215407 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.215503 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.215516 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.215539 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.215554 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:50Z","lastTransitionTime":"2025-12-02T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.318580 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.319003 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.319082 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.319195 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.319267 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:50Z","lastTransitionTime":"2025-12-02T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.422933 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.423504 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.423606 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.423683 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.423750 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:50Z","lastTransitionTime":"2025-12-02T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.535121 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.535166 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.535177 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.535192 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.535202 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:50Z","lastTransitionTime":"2025-12-02T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.638123 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.638196 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.638209 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.638232 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.638247 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:50Z","lastTransitionTime":"2025-12-02T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.742073 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.742587 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.742693 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.742840 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.742944 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:50Z","lastTransitionTime":"2025-12-02T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.845679 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.845716 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.845728 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.845747 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.845760 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:50Z","lastTransitionTime":"2025-12-02T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.948358 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.948405 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.948415 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.948437 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:50 crc kubenswrapper[4625]: I1202 13:44:50.948447 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:50Z","lastTransitionTime":"2025-12-02T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.051641 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.051698 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.051712 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.051732 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.051744 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:51Z","lastTransitionTime":"2025-12-02T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.155043 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.155481 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.155616 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.155711 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.155780 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:51Z","lastTransitionTime":"2025-12-02T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.259406 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.259472 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.259490 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.259513 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.259531 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:51Z","lastTransitionTime":"2025-12-02T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.363167 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.363617 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.363750 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.363885 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.364000 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:51Z","lastTransitionTime":"2025-12-02T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.468271 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.468333 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.468347 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.468365 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.468377 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:51Z","lastTransitionTime":"2025-12-02T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.570830 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.571101 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.571167 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.571240 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.571328 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:51Z","lastTransitionTime":"2025-12-02T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.674844 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.674892 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.674904 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.674920 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.674932 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:51Z","lastTransitionTime":"2025-12-02T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.777691 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.777745 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.777761 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.777786 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.777802 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:51Z","lastTransitionTime":"2025-12-02T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.855664 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.855734 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.855683 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.855683 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:51 crc kubenswrapper[4625]: E1202 13:44:51.855848 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:51 crc kubenswrapper[4625]: E1202 13:44:51.856002 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:51 crc kubenswrapper[4625]: E1202 13:44:51.856139 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:51 crc kubenswrapper[4625]: E1202 13:44:51.856249 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.880666 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.880721 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.880730 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.880749 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.880759 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:51Z","lastTransitionTime":"2025-12-02T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.983128 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.983194 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.983207 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.983234 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:51 crc kubenswrapper[4625]: I1202 13:44:51.983244 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:51Z","lastTransitionTime":"2025-12-02T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.086754 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.086825 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.086836 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.086861 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.086874 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:52Z","lastTransitionTime":"2025-12-02T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.190750 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.190811 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.190823 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.190842 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.190855 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:52Z","lastTransitionTime":"2025-12-02T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.196437 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.196495 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.196515 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.196541 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.196557 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:52Z","lastTransitionTime":"2025-12-02T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:52 crc kubenswrapper[4625]: E1202 13:44:52.210866 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.216579 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.216628 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
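The patch above never reaches the API server's storage: the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 presents a TLS certificate that expired on 2025-08-24, while the node clock reads 2025-12-02. A quick way to confirm this from the host is to complete a handshake with verification disabled and read the validity window off the served leaf certificate; the Go sketch below does that. The address comes from the log; everything else is a generic diagnostic, not part of the kubelet.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Address taken from the failing webhook call in the log above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip verification so the handshake completes and the expired
		// certificate can be inspected, instead of failing the way the
		// kubelet's webhook call does.
		InsecureSkipVerify: true,
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", leaf.Subject)
	fmt.Println("notBefore:", leaf.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", leaf.NotAfter.Format(time.RFC3339))
	fmt.Println("expired:  ", time.Now().After(leaf.NotAfter))
}
```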
event="NodeHasNoDiskPressure" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.216638 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.216655 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.216668 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:52Z","lastTransitionTime":"2025-12-02T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:52 crc kubenswrapper[4625]: E1202 13:44:52.233797 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.236777 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.240546 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.240755 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.240865 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.240938 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.241005 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:52Z","lastTransitionTime":"2025-12-02T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.251290 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 02 13:44:52 crc kubenswrapper[4625]: E1202 13:44:52.255402 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.257359 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.259061 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.259116 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.259129 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.259151 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.259163 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:52Z","lastTransitionTime":"2025-12-02T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.274605 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: E1202 13:44:52.279712 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"7
18d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.283840 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.283886 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.283899 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.283918 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.283930 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:52Z","lastTransitionTime":"2025-12-02T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.290042 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: E1202 13:44:52.295206 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: E1202 13:44:52.295669 4625 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.298865 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.298907 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.298919 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.298936 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.298947 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:52Z","lastTransitionTime":"2025-12-02T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.311208 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e1b425dbabadc9856488a9bb7084d2b7d2747e
df907ba21d756c9cd96d8733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e1b425dbabadc9856488a9bb7084d2b7d2747edf907ba21d756c9cd96d8733\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:44:45Z\\\",\\\"message\\\":\\\"60\\\\nI1202 13:44:45.335343 6039 services_controller.go:214] Setting up event handlers for endpoint slices for network=default\\\\nI1202 13:44:45.335367 6039 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:44:45.335405 6039 factory.go:656] Stopping watch factory\\\\nI1202 13:44:45.335417 6039 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:44:45.335587 6039 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:44:45.335914 6039 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:44:45.336042 6039 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:44:45.336548 6039 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:44:45.337158 6039 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:44:45.337193 6039 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 13:44:45.337284 6039 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.326794 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.343920 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.360847 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.374274 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.385585 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 
13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.400799 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.401693 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.401757 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.401775 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.401797 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.401810 4625 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:52Z","lastTransitionTime":"2025-12-02T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.423826 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.439757 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.453539 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.468730 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.479952 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.493432 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:52Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.504297 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.504342 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.504353 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.504367 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:52 crc kubenswrapper[4625]: I1202 13:44:52.504380 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:52Z","lastTransitionTime":"2025-12-02T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:53 crc kubenswrapper[4625]: I1202 13:44:53.855693 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:53 crc kubenswrapper[4625]: I1202 13:44:53.855748 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:53 crc kubenswrapper[4625]: I1202 13:44:53.855748 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:53 crc kubenswrapper[4625]: I1202 13:44:53.855858 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:53 crc kubenswrapper[4625]: E1202 13:44:53.856099 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:44:53 crc kubenswrapper[4625]: E1202 13:44:53.856520 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:53 crc kubenswrapper[4625]: E1202 13:44:53.856453 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:53 crc kubenswrapper[4625]: E1202 13:44:53.856636 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
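The block above is the heart of the outage: every pod sync is skipped because the runtime reports NetworkReady=false, and it does so because no CNI network configuration exists yet under /etc/kubernetes/cni/net.d/ (the ovnkube-controller container that would write one is crash-looping, as its status later in this log shows). The readiness probe behind that message is essentially a scan of the conf directory for a usable config file. A minimal, self-contained Go sketch of such a check, for illustration only (not the actual ocicni code the runtime runs):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether confDir contains at least one CNI network
// configuration file. The runtime's network-readiness logic watches for
// *.conf, *.conflist and *.json files; until one appears it keeps
// reporting NetworkReady=false.
func hasCNIConfig(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		// Mirrors the journal message: the network plugin is not ready.
		fmt.Println("container runtime network not ready: no CNI configuration file")
		return
	}
	fmt.Println("NetworkReady=true")
}

Once ovnkube-controller stays up long enough to drop its config file into that directory, the same probe flips NetworkReady to true and the NodeNotReady heartbeat spam stops.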
Dec 02 13:44:54 crc kubenswrapper[4625]: I1202 13:44:54.873035 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:54Z is after 2025-08-24T17:21:41Z"
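Every one of these status-patch failures shares a single root cause that has nothing to do with the pods themselves: the apiserver must call the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/pod, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z, months before the node's current clock time of 2025-12-02. The rejection is the standard x509 validity-window check performed during the TLS handshake. A stdlib Go sketch of that check (the certificate path is hypothetical, and this is an illustration, not the operator's own rotation code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; on a real cluster the cert lives wherever the
	// network-node-identity component mounts its serving certificate.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	// The same window test the TLS verifier applies: the cert is invalid
	// if now precedes NotBefore or follows NotAfter.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate valid until", cert.NotAfter.UTC())
}

Until that certificate is renewed, every status patch the kubelet sends will keep failing with this same message, independently of the CNI problem above.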
Dec 02 13:44:54 crc kubenswrapper[4625]: I1202 13:44:54.889909 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:54 crc kubenswrapper[4625]: I1202 13:44:54.905253 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:54 crc kubenswrapper[4625]: I1202 13:44:54.906639 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:54 crc kubenswrapper[4625]: I1202 13:44:54.906682 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:54 crc kubenswrapper[4625]: I1202 13:44:54.906694 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:54 crc kubenswrapper[4625]: I1202 13:44:54.906712 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:54 crc kubenswrapper[4625]: I1202 13:44:54.906725 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:54Z","lastTransitionTime":"2025-12-02T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:54 crc kubenswrapper[4625]: I1202 13:44:54.915853 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.12
6.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:54 crc kubenswrapper[4625]: I1202 13:44:54.936351 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e1b425dbabadc9856488a9bb7084d2b7d2747edf907ba21d756c9cd96d8733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e1b425dbabadc9856488a9bb7084d2b7d2747edf907ba21d756c9cd96d8733\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:44:45Z\\\",\\\"message\\\":\\\"60\\\\nI1202 13:44:45.335343 6039 services_controller.go:214] Setting up event handlers for endpoint slices for network=default\\\\nI1202 13:44:45.335367 6039 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:44:45.335405 6039 factory.go:656] Stopping watch factory\\\\nI1202 13:44:45.335417 6039 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:44:45.335587 6039 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:44:45.335914 6039 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:44:45.336042 6039 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:44:45.336548 6039 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:44:45.337158 6039 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:44:45.337193 6039 metrics.go:553] Stopping metrics server at 
address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 13:44:45.337284 6039 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:54Z is after 2025-08-24T17:21:41Z"
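The ovnkube-node status above also shows the kubelet's crash-loop handling at work: ovnkube-controller exited with code 1 at 13:44:45, so its next start is held back with "back-off 10s restarting failed container" and the container is reported as CrashLoopBackOff. In a stock kubelet that delay doubles on each consecutive crash up to a five-minute ceiling (assumed defaults here, not verified against this CRC build); a toy Go sketch of that schedule:

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns a kubelet-style backoff before restart attempt n
// (n = 1 is the first restart after the initial crash): base doubled on
// each consecutive crash, clamped to maxDelay. Stock kubelets use a 10s
// base and a 5m cap.
func crashLoopDelay(n int, base, maxDelay time.Duration) time.Duration {
	d := base
	for i := 1; i < n; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("restart %d: back-off %s\n", n, crashLoopDelay(n, 10*time.Second, 5*time.Minute))
	}
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s -- the same
	// progression the CrashLoopBackOff messages report as crashes pile up.
}

That cap is why a persistently failing container settles into one restart attempt roughly every five minutes instead of hammering the runtime continuously.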
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:54 crc kubenswrapper[4625]: I1202 13:44:54.963820 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7170ecd-bc74-427a-b9db-0d7d11b7e07d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69087f2c4f0daf7d97c8f803941e42b339d6482eca2edf92bc8f4d8aea9005d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46da573df86e132da8dc66092ef8a936efa16523b3869450cc4cf158412e8d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c6b1496dd33ef15eb66701070bf289b64b8fa1d9ad49f5cccccd15ede06a6f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:54 crc kubenswrapper[4625]: I1202 13:44:54.977079 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:54 crc kubenswrapper[4625]: I1202 13:44:54.988275 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:54Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:54 crc kubenswrapper[4625]: I1202 13:44:54.998675 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:54Z is after 2025-08-24T17:21:41Z" Dec 02 
13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.009760 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.009792 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.009802 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.009818 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.009828 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:55Z","lastTransitionTime":"2025-12-02T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.011834 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/r
un/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.028723 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.043946 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.056899 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.070262 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.087847 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.098113 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:44:55Z is after 2025-08-24T17:21:41Z" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.112075 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.112261 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.112394 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.112470 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.112558 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:55Z","lastTransitionTime":"2025-12-02T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.219804 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.220675 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.220752 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.220852 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.221531 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:55Z","lastTransitionTime":"2025-12-02T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.324688 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.324929 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.325009 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.325128 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.325230 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:55Z","lastTransitionTime":"2025-12-02T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.428180 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.428212 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.428221 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.428237 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.428247 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:55Z","lastTransitionTime":"2025-12-02T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.531327 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.531623 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.531693 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.531778 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.531853 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:55Z","lastTransitionTime":"2025-12-02T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.635563 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.635806 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.635817 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.635838 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.635851 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:55Z","lastTransitionTime":"2025-12-02T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.738818 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.739167 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.739279 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.739416 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.739519 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:55Z","lastTransitionTime":"2025-12-02T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.875750 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.875750 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.875767 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.876043 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:55 crc kubenswrapper[4625]: E1202 13:44:55.876553 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:55 crc kubenswrapper[4625]: E1202 13:44:55.876600 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:55 crc kubenswrapper[4625]: E1202 13:44:55.876664 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:55 crc kubenswrapper[4625]: E1202 13:44:55.876724 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.877279 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.877302 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.877327 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.877340 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.877351 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:55Z","lastTransitionTime":"2025-12-02T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.980156 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.980212 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.980222 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.980515 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:55 crc kubenswrapper[4625]: I1202 13:44:55.980528 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:55Z","lastTransitionTime":"2025-12-02T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.083342 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.083825 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.083909 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.083998 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.084068 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:56Z","lastTransitionTime":"2025-12-02T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.186482 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.186525 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.186537 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.186558 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.186574 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:56Z","lastTransitionTime":"2025-12-02T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.289764 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.289920 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.289937 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.289960 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.289974 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:56Z","lastTransitionTime":"2025-12-02T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.393742 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.393802 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.393822 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.393850 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.393869 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:56Z","lastTransitionTime":"2025-12-02T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.497737 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.497817 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.497845 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.497882 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.497908 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:56Z","lastTransitionTime":"2025-12-02T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.601200 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.601237 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.601251 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.601278 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.601296 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:56Z","lastTransitionTime":"2025-12-02T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.704716 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.705046 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.705261 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.705501 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.705733 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:56Z","lastTransitionTime":"2025-12-02T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.809423 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.809474 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.809487 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.809510 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.809522 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:56Z","lastTransitionTime":"2025-12-02T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.913499 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.913550 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.913564 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.913585 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:56 crc kubenswrapper[4625]: I1202 13:44:56.913603 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:56Z","lastTransitionTime":"2025-12-02T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.019177 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.019261 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.019277 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.019303 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.019335 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:57Z","lastTransitionTime":"2025-12-02T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.122630 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.122676 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.122687 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.122705 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.122719 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:57Z","lastTransitionTime":"2025-12-02T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.225407 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.225459 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.225473 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.225494 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.225508 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:57Z","lastTransitionTime":"2025-12-02T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.329104 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.329151 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.329161 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.329181 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.329198 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:57Z","lastTransitionTime":"2025-12-02T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.431507 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.432033 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.432144 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.432273 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.432494 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:57Z","lastTransitionTime":"2025-12-02T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.434828 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs\") pod \"network-metrics-daemon-x94k8\" (UID: \"23fa40dc-ba01-4997-bb3f-c9774637dc22\") " pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.434972 4625 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.435017 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs podName:23fa40dc-ba01-4997-bb3f-c9774637dc22 nodeName:}" failed. No retries permitted until 2025-12-02 13:45:13.435004363 +0000 UTC m=+69.397181438 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs") pod "network-metrics-daemon-x94k8" (UID: "23fa40dc-ba01-4997-bb3f-c9774637dc22") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.535877 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.535915 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.535926 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.535943 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.535955 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:57Z","lastTransitionTime":"2025-12-02T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.639653 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.639704 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.639717 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.639740 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.639755 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:57Z","lastTransitionTime":"2025-12-02T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.737860 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.737984 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:45:29.737963477 +0000 UTC m=+85.700140552 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.738088 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.738166 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.738264 4625 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.738347 4625 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.738393 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:45:29.738383678 +0000 UTC m=+85.700560753 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.738430 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:45:29.738401518 +0000 UTC m=+85.700578633 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.738197 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.738790 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.738845 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.738871 4625 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.738983 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 13:45:29.738946092 +0000 UTC m=+85.701123177 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.743830 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.743860 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.743871 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.743888 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.743899 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:57Z","lastTransitionTime":"2025-12-02T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.839513 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.839705 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.839913 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.839988 4625 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.840104 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 13:45:29.840090028 +0000 UTC m=+85.802267103 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.846059 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.846365 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.846446 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.846544 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.846624 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:57Z","lastTransitionTime":"2025-12-02T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.855706 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.855829 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.856006 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.856038 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.856067 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.856134 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.856215 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:44:57 crc kubenswrapper[4625]: E1202 13:44:57.856467 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.948988 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.949029 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.949040 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.949054 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:57 crc kubenswrapper[4625]: I1202 13:44:57.949065 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:57Z","lastTransitionTime":"2025-12-02T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.052460 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.052503 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.052514 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.052532 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.052546 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:58Z","lastTransitionTime":"2025-12-02T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.156041 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.156097 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.156117 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.156140 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.156156 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:58Z","lastTransitionTime":"2025-12-02T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.259226 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.259279 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.259291 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.259330 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.259346 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:58Z","lastTransitionTime":"2025-12-02T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.361588 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.361626 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.361763 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.361782 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.361795 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:58Z","lastTransitionTime":"2025-12-02T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.464058 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.464389 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.464467 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.464552 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.464630 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:58Z","lastTransitionTime":"2025-12-02T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.567085 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.567134 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.567147 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.567166 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.567179 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:58Z","lastTransitionTime":"2025-12-02T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.669985 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.670014 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.670022 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.670036 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.670046 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:58Z","lastTransitionTime":"2025-12-02T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.773235 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.773283 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.773293 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.773326 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.773337 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:58Z","lastTransitionTime":"2025-12-02T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.881079 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.881144 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.881156 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.881187 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.881204 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:58Z","lastTransitionTime":"2025-12-02T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.985944 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.987083 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.987510 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.987892 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:58 crc kubenswrapper[4625]: I1202 13:44:58.988255 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:58Z","lastTransitionTime":"2025-12-02T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.091717 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.091986 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.092056 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.092122 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.092187 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:59Z","lastTransitionTime":"2025-12-02T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.194986 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.195036 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.195049 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.195064 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.195076 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:59Z","lastTransitionTime":"2025-12-02T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.298463 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.299141 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.299325 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.299513 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.299692 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:59Z","lastTransitionTime":"2025-12-02T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.402564 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.403129 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.403320 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.403398 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.403441 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:59Z","lastTransitionTime":"2025-12-02T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.506454 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.506543 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.506557 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.506580 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.506593 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:59Z","lastTransitionTime":"2025-12-02T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.609079 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.609146 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.609160 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.609179 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.609193 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:59Z","lastTransitionTime":"2025-12-02T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.712708 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.713138 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.713374 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.713615 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.713768 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:59Z","lastTransitionTime":"2025-12-02T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.817531 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.817583 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.817595 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.817619 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.817634 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:59Z","lastTransitionTime":"2025-12-02T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.855127 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.855196 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.855202 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:44:59 crc kubenswrapper[4625]: E1202 13:44:59.855346 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:44:59 crc kubenswrapper[4625]: E1202 13:44:59.855471 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.855858 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8"
Dec 02 13:44:59 crc kubenswrapper[4625]: E1202 13:44:59.855896 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.856216 4625 scope.go:117] "RemoveContainer" containerID="58e1b425dbabadc9856488a9bb7084d2b7d2747edf907ba21d756c9cd96d8733"
Dec 02 13:44:59 crc kubenswrapper[4625]: E1202 13:44:59.856853 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.920881 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.920941 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.920954 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.920982 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:44:59 crc kubenswrapper[4625]: I1202 13:44:59.920995 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:44:59Z","lastTransitionTime":"2025-12-02T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.024733 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.024808 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.024820 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.024844 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.024853 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:00Z","lastTransitionTime":"2025-12-02T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.129724 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.129939 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.130026 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.130117 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.130134 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:00Z","lastTransitionTime":"2025-12-02T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.232756 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.232787 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.232797 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.232812 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.232824 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:00Z","lastTransitionTime":"2025-12-02T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.335791 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.336258 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.336274 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.336296 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.336340 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:00Z","lastTransitionTime":"2025-12-02T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.439443 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.439499 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.439511 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.439530 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.439544 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:00Z","lastTransitionTime":"2025-12-02T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.541989 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.542030 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.542042 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.542059 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.542073 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:00Z","lastTransitionTime":"2025-12-02T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.557547 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovnkube-controller/1.log" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.560404 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerStarted","Data":"9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb"} Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.561047 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.576765 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2c
afe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.592282 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.607693 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.623237 4625 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.639174 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147
c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.644864 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.644916 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.644951 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.644974 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.644990 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:00Z","lastTransitionTime":"2025-12-02T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.654506 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.666658 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.685092 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.699880 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7170ecd-bc74-427a-b9db-0d7d11b7e07d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69087f2c4f0daf7d97c8f803941e42b339d6482eca2edf92bc8f4d8aea9005d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46da573df86e132da8dc66092ef8a936efa16523b3869450cc4cf158412e8d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c6b1496dd33ef15eb66701070bf289b64b8fa1d9ad49f5cccccd15ede06a6f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.720124 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 
2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.734816 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.748005 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.758908 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.785494 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e1b425dbabadc9856488a9bb7084d2b7d2747edf907ba21d756c9cd96d8733\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:44:45Z\\\",\\\"message\\\":\\\"60\\\\nI1202 13:44:45.335343 6039 services_controller.go:214] Setting up event handlers for endpoint slices for network=default\\\\nI1202 13:44:45.335367 6039 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:44:45.335405 6039 factory.go:656] Stopping watch factory\\\\nI1202 13:44:45.335417 6039 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:44:45.335587 6039 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:44:45.335914 6039 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:44:45.336042 6039 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:44:45.336548 6039 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:44:45.337158 6039 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:44:45.337193 6039 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 13:44:45.337284 6039 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.811420 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.828496 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.839397 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.839440 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.839454 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.839476 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.839488 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:00Z","lastTransitionTime":"2025-12-02T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.847158 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:00Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.942582 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.942637 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.942665 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.942686 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:00 crc kubenswrapper[4625]: I1202 13:45:00.942699 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:00Z","lastTransitionTime":"2025-12-02T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.046812 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.046846 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.046854 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.046872 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.046881 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:01Z","lastTransitionTime":"2025-12-02T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.149659 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.149719 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.149734 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.149756 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.149771 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:01Z","lastTransitionTime":"2025-12-02T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.252710 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.252762 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.252774 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.252794 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.252807 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:01Z","lastTransitionTime":"2025-12-02T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.356299 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.356416 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.356431 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.356456 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.356471 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:01Z","lastTransitionTime":"2025-12-02T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.459492 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.459556 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.459568 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.459597 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.459620 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:01Z","lastTransitionTime":"2025-12-02T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.562847 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.562973 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.562992 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.563043 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.563059 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:01Z","lastTransitionTime":"2025-12-02T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.566980 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovnkube-controller/2.log" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.567635 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovnkube-controller/1.log" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.571010 4625 generic.go:334] "Generic (PLEG): container finished" podID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerID="9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb" exitCode=1 Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.571062 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerDied","Data":"9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb"} Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.571110 4625 scope.go:117] "RemoveContainer" containerID="58e1b425dbabadc9856488a9bb7084d2b7d2747edf907ba21d756c9cd96d8733" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.572374 4625 scope.go:117] "RemoveContainer" containerID="9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb" Dec 02 13:45:01 crc kubenswrapper[4625]: E1202 13:45:01.573058 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.598985 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.613019 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.633960 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58e1b425dbabadc9856488a9bb7084d2b7d2747edf907ba21d756c9cd96d8733\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:44:45Z\\\",\\\"message\\\":\\\"60\\\\nI1202 13:44:45.335343 6039 services_controller.go:214] Setting up event handlers for endpoint slices for network=default\\\\nI1202 13:44:45.335367 6039 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:44:45.335405 6039 factory.go:656] Stopping watch factory\\\\nI1202 13:44:45.335417 6039 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:44:45.335587 6039 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:44:45.335914 6039 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:44:45.336042 6039 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1202 13:44:45.336548 6039 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:44:45.337158 6039 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:44:45.337193 6039 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 13:44:45.337284 6039 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:00Z\\\",\\\"message\\\":\\\" 6188 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:45:00.960984 6188 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:45:00.961032 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 13:45:00.961185 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:45:00.961190 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:45:00.961706 6188 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 13:45:00.961783 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:45:00.961795 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:45:00.961833 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:45:00.961832 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:45:00.961851 6188 handler.go:208] Removed *v1.Namespace event handler 
5\\\\nI1202 13:45:00.961884 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:45:00.961962 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:45:00.962048 6188 factory.go:656] Stopping watch factory\\\\nI1202 13:45:00.962062 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 13:45:00.962068 6188 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:45:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.648521 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.661594 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7170ecd-bc74-427a-b9db-0d7d11b7e07d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69087f2c4f0daf7d97c8f803941e42b339d6482eca2edf92bc8f4d8aea9005d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46da573df86e132da8dc66092ef8a936efa16523b3869450cc4cf158412e8d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c6b1496dd33ef15eb66701070bf289b64b8fa1d9ad49f5cccccd15ede06a6f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.665875 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.665918 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.665933 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.665955 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 
13:45:01.665968 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:01Z","lastTransitionTime":"2025-12-02T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.680293 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.696470 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.710075 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.725227 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.739743 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 
13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.759902 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.768626 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.768671 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.768682 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.768722 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.768735 4625 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:01Z","lastTransitionTime":"2025-12-02T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.775568 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.788965 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.804198 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.824899 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.839468 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.855353 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.855394 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.855353 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:01 crc kubenswrapper[4625]: E1202 13:45:01.855550 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.855612 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:01 crc kubenswrapper[4625]: E1202 13:45:01.855758 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:01 crc kubenswrapper[4625]: E1202 13:45:01.855826 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:01 crc kubenswrapper[4625]: E1202 13:45:01.855787 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.857573 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:01Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.872708 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.872790 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.872803 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.872823 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.872858 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:01Z","lastTransitionTime":"2025-12-02T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.975757 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.975805 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.975821 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.975845 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:01 crc kubenswrapper[4625]: I1202 13:45:01.975860 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:01Z","lastTransitionTime":"2025-12-02T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.079384 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.079448 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.079458 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.079498 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.079509 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:02Z","lastTransitionTime":"2025-12-02T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.183467 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.183526 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.183537 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.183558 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.183577 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:02Z","lastTransitionTime":"2025-12-02T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.287130 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.287591 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.287738 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.287820 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.287889 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:02Z","lastTransitionTime":"2025-12-02T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.391431 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.391495 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.391508 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.391529 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.391540 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:02Z","lastTransitionTime":"2025-12-02T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.493813 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.493891 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.493907 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.493931 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.493943 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:02Z","lastTransitionTime":"2025-12-02T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.528291 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.528737 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.528826 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.528905 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.528969 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:02Z","lastTransitionTime":"2025-12-02T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:02 crc kubenswrapper[4625]: E1202 13:45:02.543206 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 
2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.546878 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.546918 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.546930 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.546953 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.546965 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:02Z","lastTransitionTime":"2025-12-02T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:02 crc kubenswrapper[4625]: E1202 13:45:02.562592 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 
2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.567940 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.568001 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.568019 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.568076 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.568095 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:02Z","lastTransitionTime":"2025-12-02T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.576189 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovnkube-controller/2.log" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.580483 4625 scope.go:117] "RemoveContainer" containerID="9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb" Dec 02 13:45:02 crc kubenswrapper[4625]: E1202 13:45:02.580857 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" Dec 02 13:45:02 crc kubenswrapper[4625]: E1202 13:45:02.586864 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"cru
n\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.590935 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.590979 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.590993 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.591020 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.591034 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:02Z","lastTransitionTime":"2025-12-02T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.597804 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: E1202 13:45:02.607207 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{…}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 
2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.611865 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.611903 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.611916 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.611935 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.611948 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:02Z","lastTransitionTime":"2025-12-02T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.611878 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: E1202 
13:45:02.625736 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{…}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: E1202 13:45:02.626583 4625 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.628586 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.629558 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.629633 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.629672 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.629693 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.629709 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:02Z","lastTransitionTime":"2025-12-02T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.648831 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:00Z\\\",\\\"message\\\":\\\" 6188 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:45:00.960984 6188 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:45:00.961032 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 13:45:00.961185 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:45:00.961190 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:45:00.961706 6188 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 13:45:00.961783 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:45:00.961795 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:45:00.961833 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:45:00.961832 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:45:00.961851 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:45:00.961884 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:45:00.961962 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:45:00.962048 6188 factory.go:656] Stopping watch factory\\\\nI1202 13:45:00.962062 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 13:45:00.962068 6188 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:45:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.665434 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.680847 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7170ecd-bc74-427a-b9db-0d7d11b7e07d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69087f2c4f0daf7d97c8f803941e42b339d6482eca2edf92bc8f4d8aea9005d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46da573df86e132da8dc66092ef8a936efa16523b3869450cc4cf158412e8d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c6b1496dd33ef15eb66701070bf289b64b8fa1d9ad49f5cccccd15ede06a6f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.698958 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 
2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.714065 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.729998 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.732401 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.732543 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.732732 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.732836 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.733033 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:02Z","lastTransitionTime":"2025-12-02T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.744500 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.764127 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.781880 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.795052 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.808695 4625 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.824195 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147
c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.835607 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.835678 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.835692 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.835712 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.835747 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:02Z","lastTransitionTime":"2025-12-02T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.835768 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.847379 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:02Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.939157 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.939220 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.939233 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.939256 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:02 crc kubenswrapper[4625]: I1202 13:45:02.939269 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:02Z","lastTransitionTime":"2025-12-02T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.042365 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.042410 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.042422 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.042435 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.042444 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:03Z","lastTransitionTime":"2025-12-02T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.042444 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:03Z","lastTransitionTime":"2025-12-02T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.145502 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.145549 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.145562 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.145581 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.145596 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:03Z","lastTransitionTime":"2025-12-02T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.248505 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.248601 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.248614 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.248633 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.248648 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:03Z","lastTransitionTime":"2025-12-02T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.352333 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.352383 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.352398 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.352421 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.352436 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:03Z","lastTransitionTime":"2025-12-02T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.455401 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.455486 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.455515 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.455538 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.455549 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:03Z","lastTransitionTime":"2025-12-02T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.558612 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.559093 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.559173 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.559268 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.559390 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:03Z","lastTransitionTime":"2025-12-02T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.663625 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.663679 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.663693 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.663719 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.663733 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:03Z","lastTransitionTime":"2025-12-02T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.766525 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.766570 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.766613 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.766631 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.766642 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:03Z","lastTransitionTime":"2025-12-02T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.855804 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.856007 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:03 crc kubenswrapper[4625]: E1202 13:45:03.856592 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.856167 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:03 crc kubenswrapper[4625]: E1202 13:45:03.856991 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:03 crc kubenswrapper[4625]: E1202 13:45:03.856663 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.856060 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:45:03 crc kubenswrapper[4625]: E1202 13:45:03.857255 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.869787 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.870019 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.870101 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.870164 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.870218 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:03Z","lastTransitionTime":"2025-12-02T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.973445 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.973510 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.973521 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.973541 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
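
The "No sandbox for pod can be found" / "Error syncing pod, skipping" pairs above all bottom out in the NetworkReady=false condition the setters.go entries keep re-recording: kubelet finds no CNI network configuration. The check kubelet is effectively performing can be approximated with the sketch below (Python 3 standard library only; the directory is the one named in the message, and the suffix set is an assumption based on common CNI conventions):

    import json
    import pathlib

    # Directory named in the NetworkPluginNotReady message.
    CNI_DIR = pathlib.Path("/etc/kubernetes/cni/net.d")

    # Assumption: the runtime considers the usual CNI config suffixes.
    candidates = sorted(
        p for p in CNI_DIR.iterdir()
        if p.suffix in {".conf", ".conflist", ".json"}
    ) if CNI_DIR.is_dir() else []

    if not candidates:
        print(f"no CNI configuration file in {CNI_DIR}/ -- matches the log")
    for p in candidates:
        try:
            cfg = json.loads(p.read_text())
            print(p.name, "->", cfg.get("name"), cfg.get("cniVersion"))
        except (OSError, json.JSONDecodeError) as exc:
            print(p.name, "unreadable:", exc)

In this log the directory presumably stays empty because the OVN-Kubernetes node pod, whose ovnkube-controller container is still shown as unready in the status patch further down, is what would normally write that configuration.
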
Dec 02 13:45:03 crc kubenswrapper[4625]: I1202 13:45:03.973554 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:03Z","lastTransitionTime":"2025-12-02T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.076875 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.076927 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.076938 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.076969 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.076980 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:04Z","lastTransitionTime":"2025-12-02T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.179466 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.179510 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.179518 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.179535 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.179545 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:04Z","lastTransitionTime":"2025-12-02T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.282182 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.282232 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.282241 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.282265 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
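
Each status_manager.go:875 entry in this log quotes the entire rejected patch as a doubly escaped JSON string inside its err="..." field, which makes the payloads hard to read directly. A small sketch for recovering one patch as structured JSON (Python 3 standard library only; the line value below is a shortened stand-in for a real journal line, not a verbatim copy):

    import json
    import re

    # Shortened stand-in for one "Failed to update status for pod" line;
    # real lines escape the patch the same way, just with far more content.
    line = r'err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\""'

    m = re.search(r'failed to patch status \\"(.*)\\" for pod', line)
    escaped = m.group(1)

    # Each decode pass turns \\ into \ and \" into ", undoing one layer of
    # Go-style quoting (two layers: klog's err="..." plus the patch string).
    once = escaped.encode().decode("unicode_escape")
    patch = json.loads(once.encode().decode("unicode_escape"))
    print(json.dumps(patch, indent=2))

From there, fields such as patch["status"]["conditions"] can be inspected directly instead of eyeballing the escaped blob.
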
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.282275 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:04Z","lastTransitionTime":"2025-12-02T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.384675 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.384722 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.384732 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.384753 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.384762 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:04Z","lastTransitionTime":"2025-12-02T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.487949 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.488355 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.488745 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.488951 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.489165 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:04Z","lastTransitionTime":"2025-12-02T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.592222 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.592625 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.592720 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.592795 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.592936 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:04Z","lastTransitionTime":"2025-12-02T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.696132 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.696183 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.696197 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.696216 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.696225 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:04Z","lastTransitionTime":"2025-12-02T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.798903 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.798947 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.798956 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.798976 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.798987 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:04Z","lastTransitionTime":"2025-12-02T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.876391 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.890634 4625 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.901347 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.901399 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.901412 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.901433 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.901445 4625 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:04Z","lastTransitionTime":"2025-12-02T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.904501 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-02T13:45:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.919612 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.933396 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.945111 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.962681 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:00Z\\\",\\\"message\\\":\\\" 6188 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:45:00.960984 6188 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:45:00.961032 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 13:45:00.961185 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:45:00.961190 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:45:00.961706 6188 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 13:45:00.961783 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:45:00.961795 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:45:00.961833 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:45:00.961832 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:45:00.961851 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:45:00.961884 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:45:00.961962 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:45:00.962048 6188 factory.go:656] Stopping watch factory\\\\nI1202 13:45:00.962062 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 13:45:00.962068 6188 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:45:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.979779 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:04 crc kubenswrapper[4625]: I1202 13:45:04.996768 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7170ecd-bc74-427a-b9db-0d7d11b7e07d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69087f2c4f0daf7d97c8f803941e42b339d6482eca2edf92bc8f4d8aea9005d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46da573df86e132da8dc66092ef8a936efa16523b3869450cc4cf158412e8d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c6b1496dd33ef15eb66701070bf289b64b8fa1d9ad49f5cccccd15ede06a6f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:04Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.004608 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.004737 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.004835 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.004949 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.005236 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:05Z","lastTransitionTime":"2025-12-02T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.014538 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.031440 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.047783 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.061551 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:05Z is after 2025-08-24T17:21:41Z" Dec 02 
13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.079081 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.095598 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.110114 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.110582 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.110685 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.110263 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.110786 4625 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.110963 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:05Z","lastTransitionTime":"2025-12-02T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.126424 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:05Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.215352 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.215429 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.215446 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.215478 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.215495 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:05Z","lastTransitionTime":"2025-12-02T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.319340 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.319801 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.319882 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.319982 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.320059 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:05Z","lastTransitionTime":"2025-12-02T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.423947 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.424291 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.424302 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.424333 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.424345 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:05Z","lastTransitionTime":"2025-12-02T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.533770 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.533828 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.533840 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.533858 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.533870 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:05Z","lastTransitionTime":"2025-12-02T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.636638 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.636685 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.636696 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.636718 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.636731 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:05Z","lastTransitionTime":"2025-12-02T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.739801 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.739882 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.739896 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.739915 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.739929 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:05Z","lastTransitionTime":"2025-12-02T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.843248 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.843304 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.843329 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.843357 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.843371 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:05Z","lastTransitionTime":"2025-12-02T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.855464 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.855464 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.855587 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:05 crc kubenswrapper[4625]: E1202 13:45:05.855660 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:05 crc kubenswrapper[4625]: E1202 13:45:05.855596 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:05 crc kubenswrapper[4625]: E1202 13:45:05.855785 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.856445 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:05 crc kubenswrapper[4625]: E1202 13:45:05.856551 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.946273 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.946338 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.946348 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.946368 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:05 crc kubenswrapper[4625]: I1202 13:45:05.946378 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:05Z","lastTransitionTime":"2025-12-02T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.049189 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.049229 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.049239 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.049256 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.049268 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:06Z","lastTransitionTime":"2025-12-02T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.153064 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.153126 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.153139 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.153161 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.153171 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:06Z","lastTransitionTime":"2025-12-02T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.255628 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.255664 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.255676 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.255696 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.255707 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:06Z","lastTransitionTime":"2025-12-02T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.358036 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.358110 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.358123 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.358141 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.358151 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:06Z","lastTransitionTime":"2025-12-02T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.461119 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.461176 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.461188 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.461205 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.461217 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:06Z","lastTransitionTime":"2025-12-02T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.565013 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.565074 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.565087 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.565112 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.565127 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:06Z","lastTransitionTime":"2025-12-02T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.668047 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.668108 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.668124 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.668146 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.668160 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:06Z","lastTransitionTime":"2025-12-02T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.770689 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.770742 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.770755 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.770773 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.770784 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:06Z","lastTransitionTime":"2025-12-02T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.874102 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.874706 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.874797 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.874904 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.874976 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:06Z","lastTransitionTime":"2025-12-02T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.978651 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.978694 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.978735 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.978759 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:06 crc kubenswrapper[4625]: I1202 13:45:06.978774 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:06Z","lastTransitionTime":"2025-12-02T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.082283 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.082402 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.082415 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.082434 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.082464 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:07Z","lastTransitionTime":"2025-12-02T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.185627 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.186101 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.186198 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.186299 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.186409 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:07Z","lastTransitionTime":"2025-12-02T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.289052 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.289114 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.289128 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.289153 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.289168 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:07Z","lastTransitionTime":"2025-12-02T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.391912 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.392142 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.392292 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.392414 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.392487 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:07Z","lastTransitionTime":"2025-12-02T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.496257 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.496329 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.496369 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.496386 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.496399 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:07Z","lastTransitionTime":"2025-12-02T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.599242 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.599284 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.599295 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.599326 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.599337 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:07Z","lastTransitionTime":"2025-12-02T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.702411 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.702485 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.702496 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.702517 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.702527 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:07Z","lastTransitionTime":"2025-12-02T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.805852 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.805917 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.805933 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.805979 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.805992 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:07Z","lastTransitionTime":"2025-12-02T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.855538 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:07 crc kubenswrapper[4625]: E1202 13:45:07.855760 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.856348 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:07 crc kubenswrapper[4625]: E1202 13:45:07.856425 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.856472 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:07 crc kubenswrapper[4625]: E1202 13:45:07.856522 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.856565 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:07 crc kubenswrapper[4625]: E1202 13:45:07.856614 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.909920 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.909965 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.909990 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.910007 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:07 crc kubenswrapper[4625]: I1202 13:45:07.910020 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:07Z","lastTransitionTime":"2025-12-02T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.012492 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.012572 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.012595 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.012625 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.012640 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:08Z","lastTransitionTime":"2025-12-02T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.115273 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.115355 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.115368 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.115387 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.115401 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:08Z","lastTransitionTime":"2025-12-02T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.219022 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.219079 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.219096 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.219117 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.219128 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:08Z","lastTransitionTime":"2025-12-02T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.322076 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.322123 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.322139 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.322159 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.322170 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:08Z","lastTransitionTime":"2025-12-02T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.426128 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.426168 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.426179 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.426196 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.426206 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:08Z","lastTransitionTime":"2025-12-02T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.529382 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.529457 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.529480 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.529505 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.529556 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:08Z","lastTransitionTime":"2025-12-02T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.632135 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.632186 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.632199 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.632225 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.632239 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:08Z","lastTransitionTime":"2025-12-02T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.735400 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.735454 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.735470 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.735492 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.735506 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:08Z","lastTransitionTime":"2025-12-02T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.838820 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.838880 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.838891 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.838917 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.838935 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:08Z","lastTransitionTime":"2025-12-02T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.942755 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.942814 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.942833 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.942860 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:08 crc kubenswrapper[4625]: I1202 13:45:08.942875 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:08Z","lastTransitionTime":"2025-12-02T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.045963 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.046008 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.046020 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.046035 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.046047 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:09Z","lastTransitionTime":"2025-12-02T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.149486 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.149586 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.149600 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.149620 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.149632 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:09Z","lastTransitionTime":"2025-12-02T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.253354 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.253420 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.253431 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.253455 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.253493 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:09Z","lastTransitionTime":"2025-12-02T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.356491 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.356550 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.356562 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.356587 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.356602 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:09Z","lastTransitionTime":"2025-12-02T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.458925 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.458967 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.458978 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.459032 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.459045 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:09Z","lastTransitionTime":"2025-12-02T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.561628 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.561678 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.561691 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.561712 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.561726 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:09Z","lastTransitionTime":"2025-12-02T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.663993 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.664042 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.664058 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.664078 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.664091 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:09Z","lastTransitionTime":"2025-12-02T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.767830 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.767883 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.767893 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.767914 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.767929 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:09Z","lastTransitionTime":"2025-12-02T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.855658 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.855720 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.855658 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.855787 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:09 crc kubenswrapper[4625]: E1202 13:45:09.855903 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:09 crc kubenswrapper[4625]: E1202 13:45:09.856139 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:09 crc kubenswrapper[4625]: E1202 13:45:09.856259 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:09 crc kubenswrapper[4625]: E1202 13:45:09.856741 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.870301 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.870429 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.870717 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.870735 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.870785 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.870801 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:09Z","lastTransitionTime":"2025-12-02T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.973601 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.973651 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.973664 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.973689 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:09 crc kubenswrapper[4625]: I1202 13:45:09.973706 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:09Z","lastTransitionTime":"2025-12-02T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.076573 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.076629 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.076640 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.076666 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.076682 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:10Z","lastTransitionTime":"2025-12-02T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.179798 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.179845 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.179860 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.179882 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.179895 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:10Z","lastTransitionTime":"2025-12-02T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.283145 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.283192 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.283205 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.283228 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.283240 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:10Z","lastTransitionTime":"2025-12-02T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.386913 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.386974 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.386985 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.387004 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.387013 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:10Z","lastTransitionTime":"2025-12-02T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.490676 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.490743 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.490786 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.490820 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.490845 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:10Z","lastTransitionTime":"2025-12-02T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.594519 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.594592 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.594602 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.594626 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.594639 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:10Z","lastTransitionTime":"2025-12-02T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.697589 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.697626 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.697637 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.697659 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.697673 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:10Z","lastTransitionTime":"2025-12-02T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.800797 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.800858 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.800869 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.800893 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.800905 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:10Z","lastTransitionTime":"2025-12-02T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.903691 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.903743 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.903758 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.903778 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:10 crc kubenswrapper[4625]: I1202 13:45:10.903791 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:10Z","lastTransitionTime":"2025-12-02T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.006548 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.006587 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.006595 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.006612 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.006621 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:11Z","lastTransitionTime":"2025-12-02T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.109651 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.109698 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.109712 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.109739 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.109753 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:11Z","lastTransitionTime":"2025-12-02T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.212718 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.212769 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.212782 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.212801 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.212815 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:11Z","lastTransitionTime":"2025-12-02T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.315680 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.315734 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.315745 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.315763 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.315774 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:11Z","lastTransitionTime":"2025-12-02T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.420411 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.420470 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.420559 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.420623 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.420640 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:11Z","lastTransitionTime":"2025-12-02T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.543277 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.543350 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.543368 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.543392 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.543406 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:11Z","lastTransitionTime":"2025-12-02T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.645523 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.645566 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.645577 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.645594 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.645603 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:11Z","lastTransitionTime":"2025-12-02T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.748575 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.748619 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.748629 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.748647 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.748657 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:11Z","lastTransitionTime":"2025-12-02T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.851503 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.851545 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.851555 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.851572 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.851582 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:11Z","lastTransitionTime":"2025-12-02T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.855794 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.855846 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:11 crc kubenswrapper[4625]: E1202 13:45:11.855927 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.855795 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:11 crc kubenswrapper[4625]: E1202 13:45:11.856039 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.856076 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:11 crc kubenswrapper[4625]: E1202 13:45:11.856170 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:11 crc kubenswrapper[4625]: E1202 13:45:11.856650 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.954754 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.954791 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.954801 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.954818 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:11 crc kubenswrapper[4625]: I1202 13:45:11.954831 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:11Z","lastTransitionTime":"2025-12-02T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.058244 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.058337 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.058357 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.058383 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.058412 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:12Z","lastTransitionTime":"2025-12-02T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.161512 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.161563 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.161574 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.161604 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.161618 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:12Z","lastTransitionTime":"2025-12-02T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.264914 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.264971 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.264985 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.265008 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.265021 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:12Z","lastTransitionTime":"2025-12-02T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.368363 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.368409 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.368421 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.368439 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.368450 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:12Z","lastTransitionTime":"2025-12-02T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.471568 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.471622 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.471632 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.471654 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.471665 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:12Z","lastTransitionTime":"2025-12-02T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.574987 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.575027 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.575038 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.575060 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.575072 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:12Z","lastTransitionTime":"2025-12-02T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.678215 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.678257 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.678267 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.678291 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.678329 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:12Z","lastTransitionTime":"2025-12-02T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.781353 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.781427 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.781444 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.781472 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.781486 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:12Z","lastTransitionTime":"2025-12-02T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.900652 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.900695 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.900709 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.900724 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.900735 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:12Z","lastTransitionTime":"2025-12-02T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.901905 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.901962 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.901977 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.901997 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.902012 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:12Z","lastTransitionTime":"2025-12-02T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:12 crc kubenswrapper[4625]: E1202 13:45:12.917089 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:12Z is after 
2025-08-24T17:21:41Z" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.922732 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.922807 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.922832 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.922872 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.922885 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:12Z","lastTransitionTime":"2025-12-02T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:12 crc kubenswrapper[4625]: E1202 13:45:12.937524 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:12Z is after 
2025-08-24T17:21:41Z" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.941957 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.941995 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.942004 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.942019 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.942029 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:12Z","lastTransitionTime":"2025-12-02T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:12 crc kubenswrapper[4625]: E1202 13:45:12.955359 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ [status patch payload elided; identical to the previous attempt] }\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:12Z is after 
2025-08-24T17:21:41Z" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.963756 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.963823 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.963841 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.963866 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.963882 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:12Z","lastTransitionTime":"2025-12-02T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:12 crc kubenswrapper[4625]: E1202 13:45:12.976349 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ [status patch payload elided; identical to the previous attempt] }\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:12Z is after 
2025-08-24T17:21:41Z" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.980620 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.980668 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.980681 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.980703 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:12 crc kubenswrapper[4625]: I1202 13:45:12.980713 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:12Z","lastTransitionTime":"2025-12-02T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:12 crc kubenswrapper[4625]: E1202 13:45:12.993396 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ [status patch payload elided; identical to the previous attempt] }\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:12Z is after 
2025-08-24T17:21:41Z" Dec 02 13:45:12 crc kubenswrapper[4625]: E1202 13:45:12.993516 4625 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.003232 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.003381 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.003485 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.003663 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.003758 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:13Z","lastTransitionTime":"2025-12-02T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.107599 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.107654 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.107668 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.107687 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.107697 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:13Z","lastTransitionTime":"2025-12-02T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.210336 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.210563 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.210579 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.210598 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.210614 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:13Z","lastTransitionTime":"2025-12-02T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.313006 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.313465 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.313566 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.313649 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.313713 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:13Z","lastTransitionTime":"2025-12-02T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.416350 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.416820 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.416905 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.416985 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.417107 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:13Z","lastTransitionTime":"2025-12-02T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.519968 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.520009 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.520021 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.520038 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.520048 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:13Z","lastTransitionTime":"2025-12-02T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.526403 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs\") pod \"network-metrics-daemon-x94k8\" (UID: \"23fa40dc-ba01-4997-bb3f-c9774637dc22\") " pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:13 crc kubenswrapper[4625]: E1202 13:45:13.526581 4625 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:45:13 crc kubenswrapper[4625]: E1202 13:45:13.526649 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs podName:23fa40dc-ba01-4997-bb3f-c9774637dc22 nodeName:}" failed. No retries permitted until 2025-12-02 13:45:45.52662981 +0000 UTC m=+101.488806985 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs") pod "network-metrics-daemon-x94k8" (UID: "23fa40dc-ba01-4997-bb3f-c9774637dc22") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.622433 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.622479 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.622494 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.622518 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.622533 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:13Z","lastTransitionTime":"2025-12-02T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.725475 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.725863 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.725949 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.726030 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.726098 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:13Z","lastTransitionTime":"2025-12-02T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.828593 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.828631 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.828640 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.828654 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.828663 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:13Z","lastTransitionTime":"2025-12-02T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.856014 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.856022 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.856109 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.856109 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:13 crc kubenswrapper[4625]: E1202 13:45:13.856676 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.856835 4625 scope.go:117] "RemoveContainer" containerID="9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb" Dec 02 13:45:13 crc kubenswrapper[4625]: E1202 13:45:13.856851 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:13 crc kubenswrapper[4625]: E1202 13:45:13.857038 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" Dec 02 13:45:13 crc kubenswrapper[4625]: E1202 13:45:13.857217 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:13 crc kubenswrapper[4625]: E1202 13:45:13.857196 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.931861 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.931920 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.931930 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.931949 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:13 crc kubenswrapper[4625]: I1202 13:45:13.931964 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:13Z","lastTransitionTime":"2025-12-02T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.034611 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.034654 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.034663 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.034681 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.034691 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:14Z","lastTransitionTime":"2025-12-02T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.137349 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.137415 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.137424 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.137460 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.137473 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:14Z","lastTransitionTime":"2025-12-02T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.239924 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.239979 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.240000 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.240030 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.240051 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:14Z","lastTransitionTime":"2025-12-02T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.343020 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.343075 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.343089 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.343106 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.343119 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:14Z","lastTransitionTime":"2025-12-02T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.447001 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.447047 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.447061 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.447082 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.447094 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:14Z","lastTransitionTime":"2025-12-02T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.549502 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.549544 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.549560 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.549577 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.549590 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:14Z","lastTransitionTime":"2025-12-02T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.651807 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.651878 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.651914 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.651939 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.651951 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:14Z","lastTransitionTime":"2025-12-02T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.754914 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.754957 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.754970 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.754993 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.755004 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:14Z","lastTransitionTime":"2025-12-02T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.857404 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.857480 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.857495 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.857518 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.857530 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:14Z","lastTransitionTime":"2025-12-02T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
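
[editor's note] The status_manager entries that follow are one failure repeated per pod: every status patch is rejected because the apiserver cannot complete a TLS handshake with the pod.network-node-identity.openshift.io webhook; its serving certificate's NotAfter (2025-08-24T17:21:41Z) is months behind the node clock (2025-12-02), which suggests a CRC VM resumed long after its embedded certificates expired. The pod statuses themselves look healthy; only the webhook call fails. The failing check is just a validity-window comparison; a sketch using the two timestamps taken verbatim from the log:

```python
# The repeated "x509: certificate has expired or is not yet valid" failures
# below reduce to this comparison; both timestamps come from the log entries.
from datetime import datetime, timezone

not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)  # webhook cert NotAfter
now = datetime(2025, 12, 2, 13, 45, 14, tzinfo=timezone.utc)        # node clock at failure

valid = not_after >= now  # NotBefore <= now is the other half of the window check
print(valid, (now - not_after).days)  # False 99 -> expired ~99 days before 'now'
```
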
Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.876219 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:14Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.890574 4625 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:14Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.903469 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:14Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.917920 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:14Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.942261 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:00Z\\\",\\\"message\\\":\\\" 6188 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:45:00.960984 6188 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:45:00.961032 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 13:45:00.961185 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:45:00.961190 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:45:00.961706 6188 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 13:45:00.961783 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:45:00.961795 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:45:00.961833 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:45:00.961832 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:45:00.961851 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:45:00.961884 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:45:00.961962 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:45:00.962048 6188 factory.go:656] Stopping watch factory\\\\nI1202 13:45:00.962062 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 13:45:00.962068 6188 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:45:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:14Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.955878 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b61692d-a173-459c-bac5-2f4e51c4d239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23c06e059fd4ff4a08b9aad36fa53b7d5b2abcc4ea6d5b6a2157ff5cd9302d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68fb5ddeff76d87edf2b31325292c1b9720cbe78fa293b
fe0c965e43486e3beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68fb5ddeff76d87edf2b31325292c1b9720cbe78fa293bfe0c965e43486e3beb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:14Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.961104 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.961346 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.961375 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.961434 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.961455 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:14Z","lastTransitionTime":"2025-12-02T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.970708 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:14Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:14 crc kubenswrapper[4625]: I1202 13:45:14.985873 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7170ecd-bc74-427a-b9db-0d7d11b7e07d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69087f2c4f0daf7d97c8f803941e42b339d6482eca2edf92bc8f4d8aea9005d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46da573df86e132da8dc66092ef8a936efa16523b3869450cc4cf158412e8d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c6b1496dd33ef15eb66701070bf289b64b8fa1d9ad49f5cccccd15ede06a6f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:14Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.000983 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:14Z is after 
2025-08-24T17:21:41Z" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.017788 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.035130 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.051401 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.065073 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.065148 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.065160 4625 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.065180 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.065194 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:15Z","lastTransitionTime":"2025-12-02T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.067674 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.082497 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.100347 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.116741 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.134581 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.152197 4625 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:15Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.168849 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.169192 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.169267 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.169364 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.169499 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:15Z","lastTransitionTime":"2025-12-02T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.273139 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.273488 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.273588 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.273668 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.273752 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:15Z","lastTransitionTime":"2025-12-02T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.377344 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.377399 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.377419 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.377449 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.377468 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:15Z","lastTransitionTime":"2025-12-02T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.480882 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.481363 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.481481 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.481596 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.481690 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:15Z","lastTransitionTime":"2025-12-02T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.585665 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.586490 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.586586 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.586681 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.586774 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:15Z","lastTransitionTime":"2025-12-02T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.690505 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.690545 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.690554 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.690573 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.690583 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:15Z","lastTransitionTime":"2025-12-02T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.792919 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.792959 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.792968 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.792984 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.792994 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:15Z","lastTransitionTime":"2025-12-02T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.855691 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:15 crc kubenswrapper[4625]: E1202 13:45:15.855895 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.856258 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:15 crc kubenswrapper[4625]: E1202 13:45:15.856368 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.856622 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.856661 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:15 crc kubenswrapper[4625]: E1202 13:45:15.856894 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:15 crc kubenswrapper[4625]: E1202 13:45:15.857097 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.895995 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.896511 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.896640 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.896765 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:15 crc kubenswrapper[4625]: I1202 13:45:15.896912 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:15Z","lastTransitionTime":"2025-12-02T13:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
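
Interleaved with the webhook failures, the kubelet is also flipping the node's `Ready` condition to `False` roughly every 100 ms with `KubeletNotReady ... no CNI configuration file in /etc/kubernetes/cni/net.d/`, and sandbox creation is refused for the same reason: until the network provider (ovn-kubernetes here) writes a CNI config, no pod network can be plumbed, which is why `network-check-source`, `network-metrics-daemon`, `networking-console-plugin`, and `network-check-target` all sit in `ContainerCreating`. Tallying the three recurring signatures makes the cascade easy to see; a sketch under the same saved-journal assumption:

```python
import re
from collections import Counter

# Sketch: bucket the three recurring failure signatures in this journal to
# show the cascade (expired webhook cert -> status patches rejected;
# missing CNI config -> node NotReady -> pod sandboxes cannot start).
SIGNATURES = {
    "webhook cert expired":    re.compile(r"certificate has expired"),
    "node not ready (no CNI)": re.compile(r'"Node became not ready"'),
    "sandbox blocked":         re.compile(r"No sandbox for pod can be found"),
}
pod_pat = re.compile(r'pod="([^"]+)"')

counts, pods = Counter(), Counter()
with open("kubelet.log", encoding="utf-8", errors="replace") as f:
    for line in f:
        for label, pat in SIGNATURES.items():
            if pat.search(line):
                counts[label] += 1
                m = pod_pat.search(line)
                if m:
                    pods[m.group(1)] += 1

for label, n in counts.most_common():
    print(f"{n:5d}  {label}")
print("most affected pods:", pods.most_common(5))
```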
Has your network provider started?"} Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.000246 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.000301 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.000334 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.000360 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.000373 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:16Z","lastTransitionTime":"2025-12-02T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.103441 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.103511 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.103528 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.103552 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.103571 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:16Z","lastTransitionTime":"2025-12-02T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.206887 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.206948 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.206967 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.206990 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.207004 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:16Z","lastTransitionTime":"2025-12-02T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.310361 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.310414 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.310425 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.310446 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.310458 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:16Z","lastTransitionTime":"2025-12-02T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.413475 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.413559 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.413573 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.413595 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.413612 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:16Z","lastTransitionTime":"2025-12-02T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.516546 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.516704 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.516774 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.516800 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.516810 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:16Z","lastTransitionTime":"2025-12-02T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.619545 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.619611 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.619633 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.619660 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.619684 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:16Z","lastTransitionTime":"2025-12-02T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.722441 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.722490 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.722503 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.722520 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.722534 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:16Z","lastTransitionTime":"2025-12-02T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.826226 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.826271 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.826283 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.826322 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.826336 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:16Z","lastTransitionTime":"2025-12-02T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.928798 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.928842 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.928854 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.928872 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:16 crc kubenswrapper[4625]: I1202 13:45:16.928883 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:16Z","lastTransitionTime":"2025-12-02T13:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.031537 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.031592 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.031607 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.031624 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.031636 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:17Z","lastTransitionTime":"2025-12-02T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.134243 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.134282 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.134294 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.134335 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.134351 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:17Z","lastTransitionTime":"2025-12-02T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.236905 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.236945 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.236958 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.236973 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.236983 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:17Z","lastTransitionTime":"2025-12-02T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.338789 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.338836 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.338848 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.338865 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.338879 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:17Z","lastTransitionTime":"2025-12-02T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.456795 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.456832 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.456840 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.456854 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.457222 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:17Z","lastTransitionTime":"2025-12-02T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.560107 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.560154 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.560166 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.560183 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.560197 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:17Z","lastTransitionTime":"2025-12-02T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.663033 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.663096 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.663109 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.663126 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.663137 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:17Z","lastTransitionTime":"2025-12-02T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.766241 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.766293 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.766331 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.766353 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.766372 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:17Z","lastTransitionTime":"2025-12-02T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.855734 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.855818 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.855907 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:17 crc kubenswrapper[4625]: E1202 13:45:17.855945 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.856035 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:17 crc kubenswrapper[4625]: E1202 13:45:17.856137 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:17 crc kubenswrapper[4625]: E1202 13:45:17.856223 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:17 crc kubenswrapper[4625]: E1202 13:45:17.856352 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.869069 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.869131 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.869144 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.869172 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.869184 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:17Z","lastTransitionTime":"2025-12-02T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.972888 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.972958 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.972969 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.972990 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:17 crc kubenswrapper[4625]: I1202 13:45:17.973000 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:17Z","lastTransitionTime":"2025-12-02T13:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.076039 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.076095 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.076108 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.076127 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.076140 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:18Z","lastTransitionTime":"2025-12-02T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.178740 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.178775 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.178783 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.178797 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.178806 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:18Z","lastTransitionTime":"2025-12-02T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.281500 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.281541 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.281552 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.281570 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.281583 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:18Z","lastTransitionTime":"2025-12-02T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.384365 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.384416 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.384429 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.384448 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.384459 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:18Z","lastTransitionTime":"2025-12-02T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.486815 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.486862 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.486870 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.486884 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.486895 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:18Z","lastTransitionTime":"2025-12-02T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.589458 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.589499 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.589508 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.589526 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.589536 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:18Z","lastTransitionTime":"2025-12-02T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.692506 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.692553 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.692564 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.692585 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.692597 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:18Z","lastTransitionTime":"2025-12-02T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.795847 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.795938 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.795970 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.796006 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.796032 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:18Z","lastTransitionTime":"2025-12-02T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.898820 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.898882 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.898898 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.898919 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:18 crc kubenswrapper[4625]: I1202 13:45:18.898933 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:18Z","lastTransitionTime":"2025-12-02T13:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.002522 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.002571 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.002588 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.002611 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.002629 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:19Z","lastTransitionTime":"2025-12-02T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.105795 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.105860 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.105887 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.105917 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.105942 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:19Z","lastTransitionTime":"2025-12-02T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.208884 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.208962 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.208984 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.209012 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.209036 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:19Z","lastTransitionTime":"2025-12-02T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.312204 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.312253 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.312277 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.312328 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.312346 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:19Z","lastTransitionTime":"2025-12-02T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.415609 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.415652 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.415666 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.415684 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.415696 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:19Z","lastTransitionTime":"2025-12-02T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.518544 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.518589 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.518600 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.518615 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.518628 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:19Z","lastTransitionTime":"2025-12-02T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.621561 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.621608 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.621616 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.621632 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.621642 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:19Z","lastTransitionTime":"2025-12-02T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.636705 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lnf62_dd11bfd3-e3e2-47ac-8354-30dd684045dc/kube-multus/0.log" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.636791 4625 generic.go:334] "Generic (PLEG): container finished" podID="dd11bfd3-e3e2-47ac-8354-30dd684045dc" containerID="407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2" exitCode=1 Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.636842 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lnf62" event={"ID":"dd11bfd3-e3e2-47ac-8354-30dd684045dc","Type":"ContainerDied","Data":"407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2"} Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.637434 4625 scope.go:117] "RemoveContainer" containerID="407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.649663 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.662246 4625 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:18Z\\\",\\\"message\\\":\\\"2025-12-02T13:44:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fd8531b1-f27e-4196-9889-b1e12938c748\\\\n2025-12-02T13:44:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fd8531b1-f27e-4196-9889-b1e12938c748 to /host/opt/cni/bin/\\\\n2025-12-02T13:44:33Z [verbose] multus-daemon started\\\\n2025-12-02T13:44:33Z [verbose] Readiness Indicator file check\\\\n2025-12-02T13:45:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.678139 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.694380 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.705393 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.715076 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc 
kubenswrapper[4625]: I1202 13:45:19.724783 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.724837 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.724847 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.724863 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.724876 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:19Z","lastTransitionTime":"2025-12-02T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.734781 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\"
:\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.749428 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7170ecd-bc74-427a-b9db-0d7d11b7e07d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69087f2c4f0daf7d97c8f803941e42b339d6482eca2edf92bc8f4d8aea9005d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46da573df86e132da8dc66092ef8a936efa16523b3869450cc4cf158412e8d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c6b1496dd33ef15eb66701070bf289b64b8fa1d9ad49f5cccccd15ede06a6f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.767496 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.781221 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.795289 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.806035 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.824125 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:00Z\\\",\\\"message\\\":\\\" 6188 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:45:00.960984 6188 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:45:00.961032 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 13:45:00.961185 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:45:00.961190 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:45:00.961706 6188 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 13:45:00.961783 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:45:00.961795 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:45:00.961833 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:45:00.961832 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:45:00.961851 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:45:00.961884 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:45:00.961962 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:45:00.962048 6188 factory.go:656] Stopping watch factory\\\\nI1202 13:45:00.962062 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 13:45:00.962068 6188 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:45:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.827647 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.827675 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.827686 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.827699 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.827709 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:19Z","lastTransitionTime":"2025-12-02T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.835584 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b61692d-a173-459c-bac5-2f4e51c4d239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23c06e059fd4ff4a08b9aad36fa53b7d5b2abcc4ea6d5b6a2157ff5cd9302d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68fb5ddeff76d87edf2b31325292c1b9720cbe78fa293bfe0c965e43486e3beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68fb5ddeff76d87edf2b31325292c1b9720cbe78fa293bfe0c965e43486e3beb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.855171 4625 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.855207 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.855180 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.855634 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:19 crc kubenswrapper[4625]: E1202 13:45:19.855754 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:19 crc kubenswrapper[4625]: E1202 13:45:19.855960 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:19 crc kubenswrapper[4625]: E1202 13:45:19.856047 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:19 crc kubenswrapper[4625]: E1202 13:45:19.856152 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.856740 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.869158 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 
13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.883758 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.896625 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:19Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.930378 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.930409 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.930425 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.930442 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:19 crc kubenswrapper[4625]: I1202 13:45:19.930452 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:19Z","lastTransitionTime":"2025-12-02T13:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.033118 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.033180 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.033198 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.033223 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.033241 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:20Z","lastTransitionTime":"2025-12-02T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.137945 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.137977 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.137986 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.137998 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.138007 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:20Z","lastTransitionTime":"2025-12-02T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.246047 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.246108 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.246127 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.246152 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.246170 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:20Z","lastTransitionTime":"2025-12-02T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.385405 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.385443 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.385453 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.385466 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.385476 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:20Z","lastTransitionTime":"2025-12-02T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.487947 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.487982 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.487994 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.488016 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.488030 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:20Z","lastTransitionTime":"2025-12-02T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.590859 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.590895 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.590907 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.590926 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.590940 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:20Z","lastTransitionTime":"2025-12-02T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.642294 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lnf62_dd11bfd3-e3e2-47ac-8354-30dd684045dc/kube-multus/0.log" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.642388 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lnf62" event={"ID":"dd11bfd3-e3e2-47ac-8354-30dd684045dc","Type":"ContainerStarted","Data":"507ce7f93493157eaee11509f975c22a655957ec9c0e48169d075f4eb3a301ef"} Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.658456 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-
binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7a
a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.668688 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.679126 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.688223 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.693570 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.693621 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.693633 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.693652 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.693667 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:20Z","lastTransitionTime":"2025-12-02T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.707590 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:00Z\\\",\\\"message\\\":\\\" 6188 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:45:00.960984 6188 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:45:00.961032 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 13:45:00.961185 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:45:00.961190 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:45:00.961706 6188 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 13:45:00.961783 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:45:00.961795 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:45:00.961833 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:45:00.961832 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:45:00.961851 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:45:00.961884 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:45:00.961962 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:45:00.962048 6188 factory.go:656] Stopping watch factory\\\\nI1202 13:45:00.962062 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 13:45:00.962068 6188 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:45:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.722007 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b61692d-a173-459c-bac5-2f4e51c4d239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23c06e059fd4ff4a08b9aad36fa53b7d5b2abcc4ea6d5b6a2157ff5cd9302d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68fb5ddeff76d87ed
f2b31325292c1b9720cbe78fa293bfe0c965e43486e3beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68fb5ddeff76d87edf2b31325292c1b9720cbe78fa293bfe0c965e43486e3beb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.735177 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.746983 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7170ecd-bc74-427a-b9db-0d7d11b7e07d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69087f2c4f0daf7d97c8f803941e42b339d6482eca2edf92bc8f4d8aea9005d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46da573df86e132da8dc66092ef8a936efa16523b3869450cc4cf158412e8d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c6b1496dd33ef15eb66701070bf289b64b8fa1d9ad49f5cccccd15ede06a6f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.762382 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 
2025-08-24T17:21:41Z" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.776062 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.797416 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.797764 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.797780 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.797804 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.797824 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:20Z","lastTransitionTime":"2025-12-02T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.811061 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.834302 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.864610 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.879779 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 
13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.894461 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.900705 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.900778 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.900791 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.900810 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.900823 4625 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:20Z","lastTransitionTime":"2025-12-02T13:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.905748 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.919750 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:20 crc kubenswrapper[4625]: I1202 13:45:20.970138 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507ce7f93493157eaee11509f975c22a655957ec9c0e48169d075f4eb3a301ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:18Z\\\",\\\"message\\\":\\\"2025-12-02T13:44:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fd8531b1-f27e-4196-9889-b1e12938c748\\\\n2025-12-02T13:44:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fd8531b1-f27e-4196-9889-b1e12938c748 to /host/opt/cni/bin/\\\\n2025-12-02T13:44:33Z [verbose] multus-daemon started\\\\n2025-12-02T13:44:33Z [verbose] Readiness Indicator file check\\\\n2025-12-02T13:45:18Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:20Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.003406 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.003451 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.003462 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.003478 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.003490 4625 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:21Z","lastTransitionTime":"2025-12-02T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.105793 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.105829 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.105839 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.105852 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.105863 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:21Z","lastTransitionTime":"2025-12-02T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.208426 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.208463 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.208471 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.208486 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.208497 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:21Z","lastTransitionTime":"2025-12-02T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.311758 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.312143 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.312208 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.312291 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.312369 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:21Z","lastTransitionTime":"2025-12-02T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.415097 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.415153 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.415163 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.415178 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.415189 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:21Z","lastTransitionTime":"2025-12-02T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.518204 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.518252 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.518266 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.518282 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.518294 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:21Z","lastTransitionTime":"2025-12-02T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.622289 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.622368 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.622381 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.622405 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.622416 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:21Z","lastTransitionTime":"2025-12-02T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.726050 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.726113 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.726128 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.726154 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.726171 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:21Z","lastTransitionTime":"2025-12-02T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.829233 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.829271 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.829282 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.829297 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.829325 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:21Z","lastTransitionTime":"2025-12-02T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.855993 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:21 crc kubenswrapper[4625]: E1202 13:45:21.856183 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.856445 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.856514 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.856753 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:21 crc kubenswrapper[4625]: E1202 13:45:21.856786 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:21 crc kubenswrapper[4625]: E1202 13:45:21.856837 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:21 crc kubenswrapper[4625]: E1202 13:45:21.856885 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.932974 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.933028 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.933041 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.933063 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:21 crc kubenswrapper[4625]: I1202 13:45:21.933076 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:21Z","lastTransitionTime":"2025-12-02T13:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.035303 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.035366 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.035378 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.035398 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.035410 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:22Z","lastTransitionTime":"2025-12-02T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.138191 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.138229 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.138240 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.138254 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.138265 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:22Z","lastTransitionTime":"2025-12-02T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.240381 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.240422 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.240434 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.240451 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.240463 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:22Z","lastTransitionTime":"2025-12-02T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.342481 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.342556 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.342578 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.342629 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.342656 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:22Z","lastTransitionTime":"2025-12-02T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.445651 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.445685 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.445694 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.445709 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.445718 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:22Z","lastTransitionTime":"2025-12-02T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.548624 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.548662 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.548676 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.548692 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.548704 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:22Z","lastTransitionTime":"2025-12-02T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.650947 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.651004 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.651023 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.651047 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.651103 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:22Z","lastTransitionTime":"2025-12-02T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.754435 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.754506 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.754519 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.754538 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.754552 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:22Z","lastTransitionTime":"2025-12-02T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.857180 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.857231 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.857247 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.857272 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.857290 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:22Z","lastTransitionTime":"2025-12-02T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.959916 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.959980 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.959998 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.960026 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:22 crc kubenswrapper[4625]: I1202 13:45:22.960044 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:22Z","lastTransitionTime":"2025-12-02T13:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.063958 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.063988 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.064000 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.064016 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.064028 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:23Z","lastTransitionTime":"2025-12-02T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.166769 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.166986 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.167148 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.167231 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.167299 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:23Z","lastTransitionTime":"2025-12-02T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.270387 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.270444 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.270462 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.270492 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.270512 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:23Z","lastTransitionTime":"2025-12-02T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.325546 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.325585 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.325596 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.325613 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.325623 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:23Z","lastTransitionTime":"2025-12-02T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:23 crc kubenswrapper[4625]: E1202 13:45:23.341463 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:23Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.345478 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.345532 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.345552 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.345576 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.345592 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:23Z","lastTransitionTime":"2025-12-02T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:23 crc kubenswrapper[4625]: E1202 13:45:23.359071 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:23Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.364097 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.364139 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.364150 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.364177 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.364190 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:23Z","lastTransitionTime":"2025-12-02T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:23 crc kubenswrapper[4625]: E1202 13:45:23.382870 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:23Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.387411 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.387474 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.387488 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.387513 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.387526 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:23Z","lastTransitionTime":"2025-12-02T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:23 crc kubenswrapper[4625]: E1202 13:45:23.400021 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:23Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.404377 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.404409 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.404423 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.404445 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.404459 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:23Z","lastTransitionTime":"2025-12-02T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:23 crc kubenswrapper[4625]: E1202 13:45:23.416883 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:23Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:23 crc kubenswrapper[4625]: E1202 13:45:23.417005 4625 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.418466 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.418490 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.418499 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.418515 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.418526 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:23Z","lastTransitionTime":"2025-12-02T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.521737 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.521816 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.521831 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.521850 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.521864 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:23Z","lastTransitionTime":"2025-12-02T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.624724 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.624752 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.624760 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.624773 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.624783 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:23Z","lastTransitionTime":"2025-12-02T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.730261 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.730347 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.730361 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.730380 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.730393 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:23Z","lastTransitionTime":"2025-12-02T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.833905 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.834693 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.834712 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.834733 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.834744 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:23Z","lastTransitionTime":"2025-12-02T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.855561 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.855609 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.855635 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.855613 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:23 crc kubenswrapper[4625]: E1202 13:45:23.855768 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:23 crc kubenswrapper[4625]: E1202 13:45:23.856075 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:23 crc kubenswrapper[4625]: E1202 13:45:23.856219 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:23 crc kubenswrapper[4625]: E1202 13:45:23.856278 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.938435 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.938495 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.938512 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.938536 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:23 crc kubenswrapper[4625]: I1202 13:45:23.938551 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:23Z","lastTransitionTime":"2025-12-02T13:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.042974 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.043243 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.043332 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.043399 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.043412 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:24Z","lastTransitionTime":"2025-12-02T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.145938 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.145988 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.146007 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.146031 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.146049 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:24Z","lastTransitionTime":"2025-12-02T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.248421 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.248524 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.248544 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.248568 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.248587 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:24Z","lastTransitionTime":"2025-12-02T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.351983 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.352041 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.352058 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.352078 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.352092 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:24Z","lastTransitionTime":"2025-12-02T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.454996 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.455126 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.455152 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.455193 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.455218 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:24Z","lastTransitionTime":"2025-12-02T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.558076 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.558137 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.558161 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.558193 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.558218 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:24Z","lastTransitionTime":"2025-12-02T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.661281 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.661366 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.661384 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.661412 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.661437 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:24Z","lastTransitionTime":"2025-12-02T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.764014 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.764059 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.764070 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.764085 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.764097 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:24Z","lastTransitionTime":"2025-12-02T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.866067 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.866143 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.866214 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.866245 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.866266 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:24Z","lastTransitionTime":"2025-12-02T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.881928 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.891989 4625 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.904705 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.916058 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.925641 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.935014 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.953618 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:00Z\\\",\\\"message\\\":\\\" 6188 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:45:00.960984 6188 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:45:00.961032 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 13:45:00.961185 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:45:00.961190 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:45:00.961706 6188 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 13:45:00.961783 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:45:00.961795 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:45:00.961833 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:45:00.961832 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:45:00.961851 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:45:00.961884 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:45:00.961962 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:45:00.962048 6188 factory.go:656] Stopping watch factory\\\\nI1202 13:45:00.962062 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 13:45:00.962068 6188 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:45:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.964179 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b61692d-a173-459c-bac5-2f4e51c4d239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23c06e059fd4ff4a08b9aad36fa53b7d5b2abcc4ea6d5b6a2157ff5cd9302d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68fb5ddeff76d87edf2b31325292c1b9720cbe78fa293b
fe0c965e43486e3beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68fb5ddeff76d87edf2b31325292c1b9720cbe78fa293bfe0c965e43486e3beb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.968930 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.968965 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.968977 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.968996 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.969009 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:24Z","lastTransitionTime":"2025-12-02T13:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.977427 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:24 crc kubenswrapper[4625]: I1202 13:45:24.990994 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7170ecd-bc74-427a-b9db-0d7d11b7e07d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69087f2c4f0daf7d97c8f803941e42b339d6482eca2edf92bc8f4d8aea9005d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46da573df86e132da8dc66092ef8a936efa16523b3869450cc4cf158412e8d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c6b1496dd33ef15eb66701070bf289b64b8fa1d9ad49f5cccccd15ede06a6f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:24Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.003168 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:25Z is after 
2025-08-24T17:21:41Z" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.017364 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.031334 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.048607 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:25Z is after 2025-08-24T17:21:41Z" Dec 02 
13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.060412 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.071182 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.071208 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.071216 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.071229 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.071238 4625 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:25Z","lastTransitionTime":"2025-12-02T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.074816 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.088990 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.105691 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507ce7f93493157eaee11509f975c22a655957ec9c0e48169d075f4eb3a301ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:18Z\\\",\\\"message\\\":\\\"2025-12-02T13:44:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fd8531b1-f27e-4196-9889-b1e12938c748\\\\n2025-12-02T13:44:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fd8531b1-f27e-4196-9889-b1e12938c748 to /host/opt/cni/bin/\\\\n2025-12-02T13:44:33Z [verbose] multus-daemon started\\\\n2025-12-02T13:44:33Z [verbose] Readiness Indicator file check\\\\n2025-12-02T13:45:18Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:25Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.174029 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.174057 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.174066 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.174081 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.174090 4625 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:25Z","lastTransitionTime":"2025-12-02T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.276830 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.276895 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.276916 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.276935 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.276948 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:25Z","lastTransitionTime":"2025-12-02T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.380200 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.380254 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.380264 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.380282 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.380292 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:25Z","lastTransitionTime":"2025-12-02T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.483136 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.483172 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.483181 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.483193 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.483202 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:25Z","lastTransitionTime":"2025-12-02T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.586412 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.586453 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.586464 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.586480 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.586492 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:25Z","lastTransitionTime":"2025-12-02T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.689551 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.689689 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.689715 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.689763 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.689784 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:25Z","lastTransitionTime":"2025-12-02T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.792614 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.792660 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.792675 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.792697 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.792711 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:25Z","lastTransitionTime":"2025-12-02T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.855079 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:25 crc kubenswrapper[4625]: E1202 13:45:25.855265 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.855612 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.855648 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:25 crc kubenswrapper[4625]: E1202 13:45:25.855795 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.855844 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:25 crc kubenswrapper[4625]: E1202 13:45:25.856029 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:25 crc kubenswrapper[4625]: E1202 13:45:25.856112 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.895490 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.895569 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.895607 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.895638 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.895661 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:25Z","lastTransitionTime":"2025-12-02T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.998162 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.998214 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.998225 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.998239 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:25 crc kubenswrapper[4625]: I1202 13:45:25.998248 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:25Z","lastTransitionTime":"2025-12-02T13:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.100847 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.100911 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.100923 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.100941 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.100952 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:26Z","lastTransitionTime":"2025-12-02T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.204166 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.204243 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.204266 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.204296 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.204353 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:26Z","lastTransitionTime":"2025-12-02T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.307251 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.307351 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.307390 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.307427 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.307450 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:26Z","lastTransitionTime":"2025-12-02T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.410999 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.411080 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.411105 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.411139 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.411163 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:26Z","lastTransitionTime":"2025-12-02T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.513405 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.513458 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.513467 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.513486 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.513497 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:26Z","lastTransitionTime":"2025-12-02T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.616222 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.616253 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.616261 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.616274 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.616285 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:26Z","lastTransitionTime":"2025-12-02T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.718040 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.718076 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.718086 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.718099 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.718108 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:26Z","lastTransitionTime":"2025-12-02T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.820553 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.820593 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.820602 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.820616 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.820628 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:26Z","lastTransitionTime":"2025-12-02T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.856844 4625 scope.go:117] "RemoveContainer" containerID="9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.922559 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.922599 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.922612 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.922629 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:26 crc kubenswrapper[4625]: I1202 13:45:26.922641 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:26Z","lastTransitionTime":"2025-12-02T13:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.025903 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.025955 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.025970 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.025992 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.026007 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:27Z","lastTransitionTime":"2025-12-02T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.130059 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.130170 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.130191 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.130250 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.130269 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:27Z","lastTransitionTime":"2025-12-02T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.232933 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.233007 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.233036 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.233068 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.233094 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:27Z","lastTransitionTime":"2025-12-02T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.336107 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.336169 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.336183 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.336205 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.336218 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:27Z","lastTransitionTime":"2025-12-02T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.438667 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.438692 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.438700 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.438712 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.438721 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:27Z","lastTransitionTime":"2025-12-02T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.541382 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.541436 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.541449 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.541467 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.541479 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:27Z","lastTransitionTime":"2025-12-02T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.653191 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.653238 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.653249 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.653263 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.653275 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:27Z","lastTransitionTime":"2025-12-02T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.663129 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovnkube-controller/2.log" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.664467 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerStarted","Data":"0eea36a9d142bc84b976480b3d8bad9fe3e55bdd9a0946fb688feccfa7eae861"} Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.665418 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.677574 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.689429 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.700210 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.716369 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eea36a9d142bc84b976480b3d8bad9fe3e55bdd9a0946fb688feccfa7eae861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:00Z\\\",\\\"message\\\":\\\" 6188 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:45:00.960984 6188 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:45:00.961032 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 13:45:00.961185 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:45:00.961190 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:45:00.961706 6188 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 13:45:00.961783 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:45:00.961795 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:45:00.961833 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:45:00.961832 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:45:00.961851 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:45:00.961884 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:45:00.961962 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:45:00.962048 6188 factory.go:656] Stopping watch factory\\\\nI1202 13:45:00.962062 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 13:45:00.962068 6188 ovnkube.go:599] Stopped ovnkube\\\\nI1202 
13:45:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:45:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.726328 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b61692d-a173-459c-bac5-2f4e51c4d239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23c06e059fd4ff4a08b9aad36fa53b7d5b2abcc4ea6d5b6a2157ff5cd9302d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68fb5ddeff76d87edf2b31325292c1b9720cbe78fa293bfe0c965e43486e3beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68fb5ddeff76d87edf2b31325292c1b9720cbe78fa293bfe0c965e43486e3beb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.736152 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.751147 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7170ecd-bc74-427a-b9db-0d7d11b7e07d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69087f2c4f0daf7d97c8f803941e42b339d6482eca2edf92bc8f4d8aea9005d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46da573df86e132da8dc66092ef8a936efa16523b3869450cc4cf158412e8d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c6b1496dd33ef15eb66701070bf289b64b8fa1d9ad49f5cccccd15ede06a6f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.755071 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.755111 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.755123 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.755139 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 
13:45:27.755149 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:27Z","lastTransitionTime":"2025-12-02T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.769705 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.783089 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.795565 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.806772 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 
13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.840362 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.852055 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.860152 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.870331 4625 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507ce7f93493157eaee11509f975c22a655957ec9c0e48169d075f4eb3a301ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:18Z\\\",\\\"message\\\":\\\"2025-12-02T13:44:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fd8531b1-f27e-4196-9889-b1e12938c748\\\\n2025-12-02T13:44:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fd8531b1-f27e-4196-9889-b1e12938c748 to /host/opt/cni/bin/\\\\n2025-12-02T13:44:33Z [verbose] multus-daemon started\\\\n2025-12-02T13:44:33Z [verbose] Readiness Indicator file check\\\\n2025-12-02T13:45:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.881694 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.889843 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.900430 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:27Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.909242 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.909267 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.909239 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:27 crc kubenswrapper[4625]: E1202 13:45:27.909374 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.909449 4625 util.go:30] "No sandbox for pod can be found. 
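The status patches the kubelet failed to apply above are ordinary JSON, but they arrive doubly escaped (once by klog's err="..." quoting, once more in the journal rendering), which makes them nearly unreadable. A minimal sketch that peels the escaping and pretty-prints the first patch it finds; it assumes one journal entry per line on stdin (e.g. piped from journalctl -u kubelet) and is illustrative only:

import json
import re
import sys

# klog marker around the escaped patch payload in each failing entry
MARKER = re.compile(r'failed to patch status \\?"(.*?)\\?" for pod')

for line in sys.stdin:
    m = MARKER.search(line)
    if not m:
        continue
    raw = m.group(1)
    for _ in range(3):  # each pass turns one layer of \" back into "
        try:
            print(json.dumps(json.loads(raw), indent=2))
            break
        except json.JSONDecodeError:
            raw = raw.encode().decode("unicode_escape")
    break  # first patch only; drop this to dump them all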
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:27 crc kubenswrapper[4625]: E1202 13:45:27.909608 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:27 crc kubenswrapper[4625]: E1202 13:45:27.909841 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:27 crc kubenswrapper[4625]: E1202 13:45:27.909912 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.911557 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.911586 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.911596 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.911608 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:27 crc kubenswrapper[4625]: I1202 13:45:27.911617 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:27Z","lastTransitionTime":"2025-12-02T13:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
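Every one of those patches is rejected for the same reason: the pod.network-node-identity.openshift.io webhook on 127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z. A hedged sketch for confirming that from the node itself; it assumes the third-party cryptography package is installed and deliberately skips chain validation, since a validating handshake would fail exactly as the kubelet's did:

import datetime
import ssl

from cryptography import x509  # third-party; pip install cryptography

HOST, PORT = "127.0.0.1", 9743  # endpoint taken from the errors above

# No CA bundle is given, so the handshake skips chain validation; a
# verifying client (like the kubelet) fails exactly as logged.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode("ascii"))

now = datetime.datetime.utcnow()  # naive UTC, like not_valid_after
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)
print("EXPIRED" if now > cert.not_valid_after else "still valid")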
Has your network provider started?"} Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.014058 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.014670 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.014732 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.014799 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.014890 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:28Z","lastTransitionTime":"2025-12-02T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.117792 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.117842 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.117852 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.117876 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.117889 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:28Z","lastTransitionTime":"2025-12-02T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.220114 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.220401 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.220409 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.220424 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.220433 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:28Z","lastTransitionTime":"2025-12-02T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.322562 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.322619 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.322636 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.322659 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.322678 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:28Z","lastTransitionTime":"2025-12-02T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.426070 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.426126 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.426143 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.426166 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.426183 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:28Z","lastTransitionTime":"2025-12-02T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.529374 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.529852 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.530037 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.530235 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.530470 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:28Z","lastTransitionTime":"2025-12-02T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.633932 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.634244 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.634360 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.634470 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.634556 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:28Z","lastTransitionTime":"2025-12-02T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.737153 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.737226 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.737238 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.737280 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.737295 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:28Z","lastTransitionTime":"2025-12-02T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.839879 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.839951 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.839970 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.839998 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.840020 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:28Z","lastTransitionTime":"2025-12-02T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.943213 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.943295 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.943355 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.943386 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:28 crc kubenswrapper[4625]: I1202 13:45:28.943410 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:28Z","lastTransitionTime":"2025-12-02T13:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.046702 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.046785 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.046811 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.046845 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.046869 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:29Z","lastTransitionTime":"2025-12-02T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.150417 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.150507 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.150528 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.150554 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.150573 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:29Z","lastTransitionTime":"2025-12-02T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.253372 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.253416 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.253427 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.253444 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.253459 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:29Z","lastTransitionTime":"2025-12-02T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.356213 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.356254 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.356266 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.356280 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.356291 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:29Z","lastTransitionTime":"2025-12-02T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.458200 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.458231 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.458241 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.458253 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.458263 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:29Z","lastTransitionTime":"2025-12-02T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.561644 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.561726 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.561747 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.561772 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.561790 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:29Z","lastTransitionTime":"2025-12-02T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.665086 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.665173 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.665198 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.665229 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.665252 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:29Z","lastTransitionTime":"2025-12-02T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.768633 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.768690 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.768707 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.768733 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.768750 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:29Z","lastTransitionTime":"2025-12-02T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
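The NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady blocks above repeat roughly every 100 ms with only the timestamps changing. A small triage sketch, again assuming journalctl-style one-entry-per-line input on stdin, that collapses the repeated "Node became not ready" conditions into distinct (reason, status, message) tuples with counts:

import json
import re
import sys
from collections import Counter

# the kubelet prints the condition as inline JSON after 'condition='
PATTERN = re.compile(r'"Node became not ready".*?condition=(\{.*\})')

counts = Counter()
for line in sys.stdin:
    m = PATTERN.search(line)
    if not m:
        continue
    cond = json.loads(m.group(1))
    # ignore the per-entry heartbeat/transition timestamps
    key = (cond.get("reason"), cond.get("status"), cond.get("message", ""))
    counts[key] += 1

for (reason, status, message), n in counts.most_common():
    print(f"{n:5d}x  reason={reason} status={status} message={message[:60]!r}")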
Has your network provider started?"} Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.810583 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.810710 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.810724 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:33.810695922 +0000 UTC m=+149.772873027 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.810780 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.810836 4625 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.810899 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:46:33.810885007 +0000 UTC m=+149.773062112 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.810901 4625 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.810845 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.810990 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-02 13:46:33.810963409 +0000 UTC m=+149.773140524 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.811198 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.811267 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.811284 4625 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.811384 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-02 13:46:33.811358769 +0000 UTC m=+149.773535834 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.856043 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.856127 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.856064 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.856064 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.856340 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.856556 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.856714 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.856825 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.872250 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.872349 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.872390 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.872423 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.872447 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:29Z","lastTransitionTime":"2025-12-02T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.912678 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.912924 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.912963 4625 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.912984 4625 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:45:29 crc kubenswrapper[4625]: E1202 13:45:29.913055 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-02 13:46:33.913032581 +0000 UTC m=+149.875209696 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.976247 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.976338 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.976356 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.976381 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:29 crc kubenswrapper[4625]: I1202 13:45:29.976401 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:29Z","lastTransitionTime":"2025-12-02T13:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.079928 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.079996 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.080013 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.080038 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.080058 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:30Z","lastTransitionTime":"2025-12-02T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.183117 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.183176 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.183192 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.183217 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.183238 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:30Z","lastTransitionTime":"2025-12-02T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.285975 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.286039 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.286060 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.286086 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.286104 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:30Z","lastTransitionTime":"2025-12-02T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.389263 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.389335 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.389351 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.389368 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.389381 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:30Z","lastTransitionTime":"2025-12-02T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.491803 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.491902 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.491920 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.491942 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.491960 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:30Z","lastTransitionTime":"2025-12-02T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.594238 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.594296 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.594361 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.594383 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.594397 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:30Z","lastTransitionTime":"2025-12-02T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.697139 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.697201 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.697212 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.697233 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.697246 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:30Z","lastTransitionTime":"2025-12-02T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.799670 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.799948 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.800013 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.800109 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:30 crc kubenswrapper[4625]: I1202 13:45:30.800191 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:30Z","lastTransitionTime":"2025-12-02T13:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:31 crc kubenswrapper[4625]: I1202 13:45:31.800030 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:31 crc kubenswrapper[4625]: I1202 13:45:31.802392 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:31 crc kubenswrapper[4625]: E1202 13:45:31.802512 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:31 crc kubenswrapper[4625]: E1202 13:45:31.803161 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:31 crc kubenswrapper[4625]: I1202 13:45:31.803505 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:31 crc kubenswrapper[4625]: I1202 13:45:31.803576 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:31 crc kubenswrapper[4625]: E1202 13:45:31.803662 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:31 crc kubenswrapper[4625]: E1202 13:45:31.803771 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:31 crc kubenswrapper[4625]: I1202 13:45:31.805940 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:31 crc kubenswrapper[4625]: I1202 13:45:31.805986 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:31 crc kubenswrapper[4625]: I1202 13:45:31.806001 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:31 crc kubenswrapper[4625]: I1202 13:45:31.806020 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:31 crc kubenswrapper[4625]: I1202 13:45:31.806035 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:31Z","lastTransitionTime":"2025-12-02T13:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:31 crc kubenswrapper[4625]: I1202 13:45:31.909654 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:31 crc kubenswrapper[4625]: I1202 13:45:31.909744 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:31 crc kubenswrapper[4625]: I1202 13:45:31.909762 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:31 crc kubenswrapper[4625]: I1202 13:45:31.909790 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:31 crc kubenswrapper[4625]: I1202 13:45:31.909803 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:31Z","lastTransitionTime":"2025-12-02T13:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.013004 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.013059 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.013078 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.013104 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.013123 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:32Z","lastTransitionTime":"2025-12-02T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.115558 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.115622 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.115638 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.115662 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.115683 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:32Z","lastTransitionTime":"2025-12-02T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.189475 4625 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 3.490799815s: [/var/lib/containers/storage/overlay/e2c2de86f197e51990c673ae5861f880b1e63999a14ae1848aa89e0a2abc6e54/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovn-acl-logging/0.log]; will not log again for this container unless duration exceeds 2s Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.190251 4625 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.655519408s: [/var/lib/containers/storage/overlay/abbec1149f6be5ea8eedd8dea466b6c12b6f21c7131a12a7aac087179c11706f/diff /var/log/pods/openshift-image-registry_node-ca-gnnxh_98490ada-9405-4703-8fef-4211d5b99400/node-ca/0.log]; will not log again for this container unless duration exceeds 2s Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.190620 4625 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.683576068s: [/var/lib/containers/storage/overlay/0fbee74f2e8e24f792a091f8b92980c239c244dfd870f8201092928d0557991f/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/northd/0.log]; will not log again for this container unless duration exceeds 2s Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.218898 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.218961 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.218980 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.219005 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.219022 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:32Z","lastTransitionTime":"2025-12-02T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.323188 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.323944 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.324185 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.324461 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.324691 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:32Z","lastTransitionTime":"2025-12-02T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.427806 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.427837 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.427846 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.427859 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.427869 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:32Z","lastTransitionTime":"2025-12-02T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.530584 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.530823 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.530889 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.531073 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.531159 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:32Z","lastTransitionTime":"2025-12-02T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.634232 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.634291 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.634357 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.634388 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.634410 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:32Z","lastTransitionTime":"2025-12-02T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.737409 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.737464 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.737483 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.737509 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.737528 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:32Z","lastTransitionTime":"2025-12-02T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.810774 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovnkube-controller/3.log" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.811246 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovnkube-controller/2.log" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.814051 4625 generic.go:334] "Generic (PLEG): container finished" podID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerID="0eea36a9d142bc84b976480b3d8bad9fe3e55bdd9a0946fb688feccfa7eae861" exitCode=1 Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.814094 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerDied","Data":"0eea36a9d142bc84b976480b3d8bad9fe3e55bdd9a0946fb688feccfa7eae861"} Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.814132 4625 scope.go:117] "RemoveContainer" containerID="9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.816029 4625 scope.go:117] "RemoveContainer" containerID="0eea36a9d142bc84b976480b3d8bad9fe3e55bdd9a0946fb688feccfa7eae861" Dec 02 13:45:32 crc kubenswrapper[4625]: E1202 13:45:32.816388 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.832780 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.840783 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.840850 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.840864 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.840880 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.840916 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:32Z","lastTransitionTime":"2025-12-02T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.847736 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.856043 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:32 crc kubenswrapper[4625]: E1202 13:45:32.856193 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.863597 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.878004 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507ce7f93493157eaee11509f975c22a655957ec9c0e48169d075f4eb3a301ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:18Z\\\",\\\"message\\\":\\\"2025-12-02T13:44:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fd8531b1-f27e-4196-9889-b1e12938c748\\\\n2025-12-02T13:44:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fd8531b1-f27e-4196-9889-b1e12938c748 to /host/opt/cni/bin/\\\\n2025-12-02T13:44:33Z [verbose] multus-daemon started\\\\n2025-12-02T13:44:33Z [verbose] Readiness Indicator file check\\\\n2025-12-02T13:45:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.894835 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.906343 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.920934 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.943998 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.944045 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.944059 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.944078 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.944091 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:32Z","lastTransitionTime":"2025-12-02T13:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.945801 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eea36a9d142bc84b976480b3d8bad9fe3e55bdd9a0946fb688feccfa7eae861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:00Z\\\",\\\"message\\\":\\\" 6188 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:45:00.960984 6188 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:45:00.961032 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 13:45:00.961185 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:45:00.961190 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:45:00.961706 6188 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 13:45:00.961783 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:45:00.961795 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:45:00.961833 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:45:00.961832 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:45:00.961851 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:45:00.961884 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:45:00.961962 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:45:00.962048 6188 factory.go:656] Stopping watch factory\\\\nI1202 13:45:00.962062 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 13:45:00.962068 6188 ovnkube.go:599] Stopped ovnkube\\\\nI1202 
13:45:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eea36a9d142bc84b976480b3d8bad9fe3e55bdd9a0946fb688feccfa7eae861\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:32Z\\\",\\\"message\\\":\\\"Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:45:28.310804 6528 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:45:28.311154 6528 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:45:28.311267 6528 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:45:28.311628 6528 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:45:28.311858 6528 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:45:28.323105 6528 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1202 13:45:28.323127 6528 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1202 13:45:28.323171 6528 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:45:28.323195 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 13:45:28.323400 6528 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:45:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.958590 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b61692d-a173-459c-bac5-2f4e51c4d239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23c06e059fd4ff4a08b9aad36fa53b7d5b2abcc4ea6d5b6a2157ff5cd9302d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68fb5ddeff76d87edf2b31325292c1b9720cbe78fa293bfe0c965e43486e3beb\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68fb5ddeff76d87edf2b31325292c1b9720cbe78fa293bfe0c965e43486e3beb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.975398 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:32 crc kubenswrapper[4625]: I1202 13:45:32.988267 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7170ecd-bc74-427a-b9db-0d7d11b7e07d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69087f2c4f0daf7d97c8f803941e42b339d6482eca2edf92bc8f4d8aea9005d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46da573df86e132da8dc66092ef8a936efa16523b3869450cc4cf158412e8d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c6b1496dd33ef15eb66701070bf289b64b8fa1d9ad49f5cccccd15ede06a6f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:32Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.000742 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:32Z is after 
2025-08-24T17:21:41Z" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.014366 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.024637 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.035817 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.046957 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.047009 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.047025 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.047048 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.047063 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:33Z","lastTransitionTime":"2025-12-02T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.051017 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.064039 4625 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.076289 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:33Z is after 2025-08-24T17:21:41Z" Dec 02 
13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.150118 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.150177 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.150194 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.150214 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.150228 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:33Z","lastTransitionTime":"2025-12-02T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} [... the same five-entry sequence (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats verbatim at 13:45:33.253, 13:45:33.357, 13:45:33.460, and 13:45:33.564 ...] Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.669793 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.669860 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.669876 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.669893 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.669907 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:33Z","lastTransitionTime":"2025-12-02T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.725653 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.726137 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.726244 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.726383 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.726479 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:33Z","lastTransitionTime":"2025-12-02T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:33 crc kubenswrapper[4625]: E1202 13:45:33.744275 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:33Z is after 2025-08-24T17:21:41Z"
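The escaped JSON quoted in the "Error updating node status, will retry" entry above is a Kubernetes strategic merge patch against the Node object's status. The sketch below reconstructs its shape with the standard library only; it is illustrative, not kubelet source, and the fields shown are a subset of the logged payload. The $setElementOrder/conditions directive fixes the ordering of the conditions list, which the API server merges by its "type" key, so the kubelet only needs to send the entries that changed.

```go
// Sketch of the node-status patch quoted (escaped) in the log entry above.
// Illustrative only: field names mirror the logged payload, but this is
// not kubelet source code.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]any{
		"status": map[string]any{
			// Strategic-merge-patch directive: the desired order of the
			// conditions list, identified by its merge key, "type".
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"},
				{"type": "DiskPressure"},
				{"type": "PIDPressure"},
				{"type": "Ready"},
			},
			// Only changed elements are sent; the API server merges them
			// into the existing list by "type".
			"conditions": []map[string]any{{
				"type":               "Ready",
				"status":             "False",
				"reason":             "KubeletNotReady",
				"message":            "container runtime network not ready: NetworkReady=false ...",
				"lastHeartbeatTime":  "2025-12-02T13:45:33Z",
				"lastTransitionTime": "2025-12-02T13:45:33Z",
			}},
		},
	}
	body, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	// The kubelet sends a body of this shape to the node's status
	// subresource as application/strategic-merge-patch+json; here we
	// just print it.
	fmt.Println(string(body))
}
```

The admission webhook intercepts exactly this request (Post "https://127.0.0.1:9743/node?timeout=10s"), which is why an expired webhook serving certificate blocks every status update even though the kubelet itself is healthy; the capture shows four such attempts (13:45:33.744275, .765346, .785386, .805266), all failing the same way.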
event="NodeHasNoDiskPressure" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.749696 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.749794 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.749905 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:33Z","lastTransitionTime":"2025-12-02T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:33 crc kubenswrapper[4625]: E1202 13:45:33.765346 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.769788 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.769834 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.769849 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.769871 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.769884 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:33Z","lastTransitionTime":"2025-12-02T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:33 crc kubenswrapper[4625]: E1202 13:45:33.785386 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.789786 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.789942 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.790062 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.790158 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.790245 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:33Z","lastTransitionTime":"2025-12-02T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:33 crc kubenswrapper[4625]: E1202 13:45:33.805266 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.810072 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.810120 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.810133 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.810152 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.810168 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:33Z","lastTransitionTime":"2025-12-02T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.819230 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovnkube-controller/3.log" Dec 02 13:45:33 crc kubenswrapper[4625]: E1202 13:45:33.825047 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d1deca0-bc51-433c-8d69-fdb0e1fb8ace\\\",\\\"systemUUID\\\":\\\"718d7937-78fb-44b3-9ae0-1d312b093168\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:33Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:33 crc kubenswrapper[4625]: E1202 13:45:33.825670 4625 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.832793 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.832863 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.832876 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.832896 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.832918 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:33Z","lastTransitionTime":"2025-12-02T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.855037 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.855178 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:33 crc kubenswrapper[4625]: E1202 13:45:33.855171 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:33 crc kubenswrapper[4625]: E1202 13:45:33.855326 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.855611 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:33 crc kubenswrapper[4625]: E1202 13:45:33.855934 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.935962 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.936016 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.936029 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.936043 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:33 crc kubenswrapper[4625]: I1202 13:45:33.936054 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:33Z","lastTransitionTime":"2025-12-02T13:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.040708 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.040806 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.040829 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.040905 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.040949 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:34Z","lastTransitionTime":"2025-12-02T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.144414 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.144494 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.144510 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.144536 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.144554 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:34Z","lastTransitionTime":"2025-12-02T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.248299 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.248361 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.248372 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.248390 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.248403 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:34Z","lastTransitionTime":"2025-12-02T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.352535 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.353026 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.353098 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.353510 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.353598 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:34Z","lastTransitionTime":"2025-12-02T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.455996 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.456277 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.456369 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.456468 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.456550 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:34Z","lastTransitionTime":"2025-12-02T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.559396 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.559450 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.559462 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.559480 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.559491 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:34Z","lastTransitionTime":"2025-12-02T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.661687 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.661724 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.661732 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.661750 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.661761 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:34Z","lastTransitionTime":"2025-12-02T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.764094 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.764689 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.764772 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.764853 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.764944 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:34Z","lastTransitionTime":"2025-12-02T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.855566 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:34 crc kubenswrapper[4625]: E1202 13:45:34.855743 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.867893 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.868365 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.868481 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.868572 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.868660 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:34Z","lastTransitionTime":"2025-12-02T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.875808 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b75e9cdc3ac0265693583e323714f3388e4f6682ce14d966eb8e6bbe9dbde29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://336591d601cecb31d7edd3106104550ecc4554ee03efa8ec1764d830147c29f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.891678 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.905457 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fb167ef-23b4-4c65-bd65-a0219101b109\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddbedc8bbb1a5dc53fa84e5417effc2aaec33531912c25353634b12b40d3bde7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f32ab8f39a66a4d28b674070ce7fa0927906926d5bbb18498423c67ab1d8e57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5p2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-cw895\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:34Z is after 2025-08-24T17:21:41Z" Dec 02 
13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.922663 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lnf62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd11bfd3-e3e2-47ac-8354-30dd684045dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:45:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://507ce7f93493157eaee11509f975c22a655957ec9c0e48169d075f4eb3a301ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:18Z\\\",\\\"message\\\":\\\"2025-12-02T13:44:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fd8531b1-f27e-4196-9889-b1e12938c748\\\\n2025-12-02T13:44:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fd8531b1-f27e-4196-9889-b1e12938c748 to /host/opt/cni/bin/\\\\n2025-12-02T13:44:33Z [verbose] multus-daemon started\\\\n2025-12-02T13:44:33Z [verbose] Readiness Indicator file check\\\\n2025-12-02T13:45:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:45:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-224t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lnf62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.937996 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce86a1bb-e2cd-4867-bf4e-297c2ff9f307\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-02T13:44:19Z\\\",\\\"message\\\":\\\"W1202 13:44:08.842903 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1202 13:44:08.843639 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764683048 cert, and key in /tmp/serving-cert-54538961/serving-signer.crt, /tmp/serving-cert-54538961/serving-signer.key\\\\nI1202 13:44:09.135199 1 observer_polling.go:159] Starting file observer\\\\nW1202 13:44:09.136283 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1202 13:44:09.136463 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1202 13:44:09.137688 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-54538961/tls.crt::/tmp/serving-cert-54538961/tls.key\\\\\\\"\\\\nF1202 13:44:19.524932 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.952566 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.967177 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d911ea35-69e2-4943-999e-389a961ce243\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e11876f6fd32ff3f1dd01ed429a457ff234ff6d13ee6a189485444bd0c76a1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdr42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c6d9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.972197 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.972255 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.972272 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.972292 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.972327 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:34Z","lastTransitionTime":"2025-12-02T13:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:34 crc kubenswrapper[4625]: I1202 13:45:34.982775 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-x94k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23fa40dc-ba01-4997-bb3f-c9774637dc22\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v8jh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-x94k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.000417 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4njgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3810fa9-85cb-4c38-a835-57f56463ff66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959cb95f1e47498a6e087899c2c970050337fd007a559c1a0ce9c2705f2fcb47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c047eba44fa37341ac7f869817f69e2f60d91d882d70f6f62ba73ac497082a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2515a748ef41924c60dec39463a146c878d6fc473d60747df9005e4a8f9dc9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3607e131a8ce16e7bf87480fd12e6b13b28ecc31668f5b5bc6c8b01c4974a83f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8147c8ca794a3d6ae9f84531b62c2115686cd10da009a3bc665492511e8ce987\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052ba1fb83ec67d0c943b84f0b1effa40067b8e34d06ec905e9316e7f306b7a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3b5892a44b2ebb7c5821c1b064507bcd4308049abe3fcaf7eaab5858b0c2f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjdqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4njgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:34Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.013088 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnnxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98490ada-9405-4703-8fef-4211d5b99400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ee82add773f6d6d94141de36ba0397a62d9db8e9d1b5089fca2700da9782a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c8mkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnnxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.028220 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4fbdc7c17d42ada8a51e6e390e4f13c9f4ec918299108e019e57535b249851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.047066 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.059245 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afe447cd58faf13cc043b014d27606563a323efb6b10e411d64df7b0d0df5415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.069879 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nqfkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"815210e5-991f-4471-b687-6565a8751ba3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b246e7301e7e44fba33580a1240d96891cb09b68a9ee74b5063a649e9fa96359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46c2g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nqfkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.082012 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.082059 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.082072 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.082090 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.082102 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:35Z","lastTransitionTime":"2025-12-02T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.089621 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df437b8d-61b5-41ea-8f56-d5472e444b23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eea36a9d142bc84b976480b3d8bad9fe3e55bdd9a0946fb688feccfa7eae861\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ea64d23b53142e5872513bb4b326a9f67bd6ac997d8780eeb53ae535ea37efb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:00Z\\\",\\\"message\\\":\\\" 6188 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1202 13:45:00.960984 6188 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1202 13:45:00.961032 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1202 13:45:00.961185 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI1202 13:45:00.961190 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1202 13:45:00.961706 6188 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1202 13:45:00.961783 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1202 13:45:00.961795 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1202 13:45:00.961833 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1202 13:45:00.961832 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1202 13:45:00.961851 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1202 13:45:00.961884 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1202 13:45:00.961962 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1202 13:45:00.962048 6188 factory.go:656] Stopping watch factory\\\\nI1202 13:45:00.962062 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1202 13:45:00.962068 6188 ovnkube.go:599] Stopped ovnkube\\\\nI1202 
13:45:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eea36a9d142bc84b976480b3d8bad9fe3e55bdd9a0946fb688feccfa7eae861\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-02T13:45:32Z\\\",\\\"message\\\":\\\"Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:45:28.310804 6528 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:45:28.311154 6528 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1202 13:45:28.311267 6528 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1202 13:45:28.311628 6528 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:45:28.311858 6528 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1202 13:45:28.323105 6528 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1202 13:45:28.323127 6528 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1202 13:45:28.323171 6528 ovnkube.go:599] Stopped ovnkube\\\\nI1202 13:45:28.323195 6528 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1202 13:45:28.323400 6528 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-02T13:45:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b9tnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lslqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.100712 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b61692d-a173-459c-bac5-2f4e51c4d239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23c06e059fd4ff4a08b9aad36fa53b7d5b2abcc4ea6d5b6a2157ff5cd9302d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68fb5ddeff76d87edf2b31325292c1b9720cbe78fa293bfe0c965e43486e3beb\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68fb5ddeff76d87edf2b31325292c1b9720cbe78fa293bfe0c965e43486e3beb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.114799 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3440fe6-a4ee-483b-8b9e-2cce2a799dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df9e4ce702054ec3dab6c489d458179982931e1c52b5ba7c1f0db5829530109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77edd6726ec227b73f163e3f3d3abe298b74ef61e6322c35c2c510365fdaf65c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b29abd248fa68468edcfa70f62798ceb8dcc95e6f08000fbd791f854c9d8376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.126349 4625 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7170ecd-bc74-427a-b9db-0d7d11b7e07d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-02T13:44:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69087f2c4f0daf7d97c8f803941e42b339d6482eca2edf92bc8f4d8aea9005d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46da573df86e132da8dc66092ef8a936efa16523b3869450cc4cf158412e8d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c6b1496dd33ef15eb66701070bf289b64b8fa1d9ad49f5cccccd15ede06a6f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-02T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cb8233b4a7d58bcad5177f8e31ab2e20a3bb7687080bcb27a87a3cf2a8a93e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-02T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-02T13:44:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-02T13:44:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-02T13:45:35Z is after 2025-08-24T17:21:41Z" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.184461 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.184523 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.184538 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.184552 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.184562 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:35Z","lastTransitionTime":"2025-12-02T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.287744 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.287795 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.287808 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.287828 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.287840 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:35Z","lastTransitionTime":"2025-12-02T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.390472 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.390820 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.391033 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.391231 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.391433 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:35Z","lastTransitionTime":"2025-12-02T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.494743 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.494809 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.494826 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.494851 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.494868 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:35Z","lastTransitionTime":"2025-12-02T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.597564 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.597601 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.597612 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.597626 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.597636 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:35Z","lastTransitionTime":"2025-12-02T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.700174 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.700220 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.700232 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.700249 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.700261 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:35Z","lastTransitionTime":"2025-12-02T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.803137 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.803615 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.803709 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.803784 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.803857 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:35Z","lastTransitionTime":"2025-12-02T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.855971 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.855999 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.855971 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:35 crc kubenswrapper[4625]: E1202 13:45:35.856148 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:35 crc kubenswrapper[4625]: E1202 13:45:35.856268 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:35 crc kubenswrapper[4625]: E1202 13:45:35.856400 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.906888 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.906931 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.906944 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.906966 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:35 crc kubenswrapper[4625]: I1202 13:45:35.906978 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:35Z","lastTransitionTime":"2025-12-02T13:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.010764 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.010841 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.010862 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.010889 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.010911 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:36Z","lastTransitionTime":"2025-12-02T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.114643 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.114706 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.114726 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.114745 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.114756 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:36Z","lastTransitionTime":"2025-12-02T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.218253 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.218300 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.218330 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.218350 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.218362 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:36Z","lastTransitionTime":"2025-12-02T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.321511 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.321575 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.321587 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.321606 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.321617 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:36Z","lastTransitionTime":"2025-12-02T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.424298 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.424355 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.424367 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.424385 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.424396 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:36Z","lastTransitionTime":"2025-12-02T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.527225 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.527259 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.527274 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.527299 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.527336 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:36Z","lastTransitionTime":"2025-12-02T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.630054 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.630115 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.630131 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.630154 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.630173 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:36Z","lastTransitionTime":"2025-12-02T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.733166 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.733220 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.733234 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.733254 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.733274 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:36Z","lastTransitionTime":"2025-12-02T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.836694 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.836733 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.836751 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.836776 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.836801 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:36Z","lastTransitionTime":"2025-12-02T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.856231 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:36 crc kubenswrapper[4625]: E1202 13:45:36.856392 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.939827 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.939874 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.939888 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.939905 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:36 crc kubenswrapper[4625]: I1202 13:45:36.939918 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:36Z","lastTransitionTime":"2025-12-02T13:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.042596 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.042645 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.042657 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.042675 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.042686 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:37Z","lastTransitionTime":"2025-12-02T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.145462 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.145549 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.145559 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.145580 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.145591 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:37Z","lastTransitionTime":"2025-12-02T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.249397 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.249465 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.249487 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.249515 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.249531 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:37Z","lastTransitionTime":"2025-12-02T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.352195 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.352253 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.352267 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.352287 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.352301 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:37Z","lastTransitionTime":"2025-12-02T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.454695 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.455134 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.455206 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.455269 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.455375 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:37Z","lastTransitionTime":"2025-12-02T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.558709 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.559207 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.559288 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.559393 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.559476 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:37Z","lastTransitionTime":"2025-12-02T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.662488 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.662546 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.662565 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.662587 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.662601 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:37Z","lastTransitionTime":"2025-12-02T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.766229 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.767098 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.767117 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.767144 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.767161 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:37Z","lastTransitionTime":"2025-12-02T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.855260 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.855471 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:37 crc kubenswrapper[4625]: E1202 13:45:37.855481 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:37 crc kubenswrapper[4625]: E1202 13:45:37.855689 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.855303 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:37 crc kubenswrapper[4625]: E1202 13:45:37.856130 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.870139 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.870206 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.870232 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.870264 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.870288 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:37Z","lastTransitionTime":"2025-12-02T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.973820 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.973883 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.973895 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.973917 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:37 crc kubenswrapper[4625]: I1202 13:45:37.973929 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:37Z","lastTransitionTime":"2025-12-02T13:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.077194 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.077237 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.077251 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.077270 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.077284 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:38Z","lastTransitionTime":"2025-12-02T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.180274 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.180341 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.180357 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.180376 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.180389 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:38Z","lastTransitionTime":"2025-12-02T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.282617 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.282694 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.282719 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.282747 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.282777 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:38Z","lastTransitionTime":"2025-12-02T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.385565 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.385638 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.385650 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.385674 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.385693 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:38Z","lastTransitionTime":"2025-12-02T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.489369 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.489434 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.489452 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.489478 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.489496 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:38Z","lastTransitionTime":"2025-12-02T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.592419 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.592482 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.592495 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.592522 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.592536 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:38Z","lastTransitionTime":"2025-12-02T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.695117 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.695164 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.695174 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.695192 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.695205 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:38Z","lastTransitionTime":"2025-12-02T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.797850 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.797899 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.797915 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.797935 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.797948 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:38Z","lastTransitionTime":"2025-12-02T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.855119 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:38 crc kubenswrapper[4625]: E1202 13:45:38.855264 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.904112 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.904176 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.904189 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.904207 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:38 crc kubenswrapper[4625]: I1202 13:45:38.904228 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:38Z","lastTransitionTime":"2025-12-02T13:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:39 crc kubenswrapper[4625]: I1202 13:45:39.007161 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:39 crc kubenswrapper[4625]: I1202 13:45:39.007231 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:39 crc kubenswrapper[4625]: I1202 13:45:39.007255 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:39 crc kubenswrapper[4625]: I1202 13:45:39.007284 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:39 crc kubenswrapper[4625]: I1202 13:45:39.007305 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:39Z","lastTransitionTime":"2025-12-02T13:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 02 13:45:39 crc kubenswrapper[4625]: I1202 13:45:39.855667 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:45:39 crc kubenswrapper[4625]: I1202 13:45:39.855706 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:45:39 crc kubenswrapper[4625]: I1202 13:45:39.855830 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:45:39 crc kubenswrapper[4625]: E1202 13:45:39.855834 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:45:39 crc kubenswrapper[4625]: E1202 13:45:39.856029 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:45:39 crc kubenswrapper[4625]: E1202 13:45:39.856141 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:45:40 crc kubenswrapper[4625]: I1202 13:45:40.855935 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8"
Dec 02 13:45:40 crc kubenswrapper[4625]: E1202 13:45:40.856113 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22"
Dec 02 13:45:41 crc kubenswrapper[4625]: I1202 13:45:41.855939 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:45:41 crc kubenswrapper[4625]: I1202 13:45:41.855949 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:45:41 crc kubenswrapper[4625]: I1202 13:45:41.856052 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:45:41 crc kubenswrapper[4625]: E1202 13:45:41.856087 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:45:41 crc kubenswrapper[4625]: E1202 13:45:41.856276 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:45:41 crc kubenswrapper[4625]: E1202 13:45:41.856494 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:45:42 crc kubenswrapper[4625]: I1202 13:45:42.855859 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8"
Dec 02 13:45:42 crc kubenswrapper[4625]: E1202 13:45:42.855996 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22"
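The surviving timestamps show the retry cadence: openshift-multus/network-metrics-daemon-x94k8 fails sync at 13:45:38.855264, 13:45:40.856113 and 13:45:42.855996, and the network-check/console pods are retried on the alternating seconds. A quick check of the deltas (timestamps copied from the entries above) confirms a steady ~2 s resync loop rather than a growing backoff:

from datetime import datetime

# Retry timestamps for openshift-multus/network-metrics-daemon-x94k8,
# copied from the "Error syncing pod, skipping" entries in this journal.
stamps = ["13:45:38.855264", "13:45:40.856113", "13:45:42.855996"]
ts = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
print([round((b - a).total_seconds(), 3) for a, b in zip(ts, ts[1:])])
# -> [2.001, 2.0]: a fixed ~2 s interval between resync attempts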
Has your network provider started?"} Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.761897 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.761955 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.761966 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.761987 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.762002 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:43Z","lastTransitionTime":"2025-12-02T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.855815 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.855847 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.855907 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:43 crc kubenswrapper[4625]: E1202 13:45:43.855969 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:43 crc kubenswrapper[4625]: E1202 13:45:43.856083 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:43 crc kubenswrapper[4625]: E1202 13:45:43.856175 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.864634 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.864674 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.864688 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.864707 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.864726 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:43Z","lastTransitionTime":"2025-12-02T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.909052 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.909095 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.909105 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.909121 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.909132 4625 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-02T13:45:43Z","lastTransitionTime":"2025-12-02T13:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.973971 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj"] Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.974600 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.976623 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.979014 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.980032 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 02 13:45:43 crc kubenswrapper[4625]: I1202 13:45:43.986870 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.019293 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e6f5b9b0-7ba0-4cc6-9837-49bef997daa1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-94hqj\" (UID: \"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.019438 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f5b9b0-7ba0-4cc6-9837-49bef997daa1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-94hqj\" (UID: \"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.019496 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6f5b9b0-7ba0-4cc6-9837-49bef997daa1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-94hqj\" (UID: \"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.019719 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e6f5b9b0-7ba0-4cc6-9837-49bef997daa1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-94hqj\" (UID: \"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.019852 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e6f5b9b0-7ba0-4cc6-9837-49bef997daa1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-94hqj\" (UID: \"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.027581 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.027562664 podStartE2EDuration="1m19.027562664s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 
13:45:44.00342046 +0000 UTC m=+99.965597555" watchObservedRunningTime="2025-12-02 13:45:44.027562664 +0000 UTC m=+99.989739739" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.062743 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podStartSLOduration=79.062710258 podStartE2EDuration="1m19.062710258s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:44.044649124 +0000 UTC m=+100.006826199" watchObservedRunningTime="2025-12-02 13:45:44.062710258 +0000 UTC m=+100.024887333" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.062867 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lnf62" podStartSLOduration=79.062863583 podStartE2EDuration="1m19.062863583s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:44.062497772 +0000 UTC m=+100.024674847" watchObservedRunningTime="2025-12-02 13:45:44.062863583 +0000 UTC m=+100.025040658" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.095750 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-4njgt" podStartSLOduration=79.095725396 podStartE2EDuration="1m19.095725396s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:44.095596282 +0000 UTC m=+100.057773357" watchObservedRunningTime="2025-12-02 13:45:44.095725396 +0000 UTC m=+100.057902461" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.109969 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-gnnxh" podStartSLOduration=79.109948139 podStartE2EDuration="1m19.109948139s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:44.107630039 +0000 UTC m=+100.069807114" watchObservedRunningTime="2025-12-02 13:45:44.109948139 +0000 UTC m=+100.072125214" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.121137 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f5b9b0-7ba0-4cc6-9837-49bef997daa1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-94hqj\" (UID: \"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.121587 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6f5b9b0-7ba0-4cc6-9837-49bef997daa1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-94hqj\" (UID: \"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.121711 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e6f5b9b0-7ba0-4cc6-9837-49bef997daa1-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-94hqj\" (UID: \"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.121787 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e6f5b9b0-7ba0-4cc6-9837-49bef997daa1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-94hqj\" (UID: \"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.121883 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e6f5b9b0-7ba0-4cc6-9837-49bef997daa1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-94hqj\" (UID: \"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.121891 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e6f5b9b0-7ba0-4cc6-9837-49bef997daa1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-94hqj\" (UID: \"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.121925 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e6f5b9b0-7ba0-4cc6-9837-49bef997daa1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-94hqj\" (UID: \"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.122619 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e6f5b9b0-7ba0-4cc6-9837-49bef997daa1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-94hqj\" (UID: \"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.139195 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f5b9b0-7ba0-4cc6-9837-49bef997daa1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-94hqj\" (UID: \"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.153075 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6f5b9b0-7ba0-4cc6-9837-49bef997daa1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-94hqj\" (UID: \"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.153892 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-nqfkd" podStartSLOduration=79.153879654 podStartE2EDuration="1m19.153879654s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-12-02 13:45:44.152707573 +0000 UTC m=+100.114884648" watchObservedRunningTime="2025-12-02 13:45:44.153879654 +0000 UTC m=+100.116056729" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.211513 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=35.211471567 podStartE2EDuration="35.211471567s" podCreationTimestamp="2025-12-02 13:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:44.192027216 +0000 UTC m=+100.154204291" watchObservedRunningTime="2025-12-02 13:45:44.211471567 +0000 UTC m=+100.173648642" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.212445 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=74.212436432 podStartE2EDuration="1m14.212436432s" podCreationTimestamp="2025-12-02 13:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:44.211285313 +0000 UTC m=+100.173462388" watchObservedRunningTime="2025-12-02 13:45:44.212436432 +0000 UTC m=+100.174613507" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.227338 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=52.227291643 podStartE2EDuration="52.227291643s" podCreationTimestamp="2025-12-02 13:44:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:44.227052087 +0000 UTC m=+100.189229172" watchObservedRunningTime="2025-12-02 13:45:44.227291643 +0000 UTC m=+100.189468718" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.292553 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.327349 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-cw895" podStartSLOduration=78.327325801 podStartE2EDuration="1m18.327325801s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:44.326834138 +0000 UTC m=+100.289011213" watchObservedRunningTime="2025-12-02 13:45:44.327325801 +0000 UTC m=+100.289502876" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.855475 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:44 crc kubenswrapper[4625]: E1202 13:45:44.856394 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.870231 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" event={"ID":"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1","Type":"ContainerStarted","Data":"0b485b07d40ee47afb21a76d2034ccbc00e220fde9a8e496296d9ea68b74ac66"} Dec 02 13:45:44 crc kubenswrapper[4625]: I1202 13:45:44.870292 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" event={"ID":"e6f5b9b0-7ba0-4cc6-9837-49bef997daa1","Type":"ContainerStarted","Data":"079d4d31c07390a3700f8d6af68d27bc6b96db063ba745f0083b1e54105fc2c5"} Dec 02 13:45:45 crc kubenswrapper[4625]: I1202 13:45:45.536751 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs\") pod \"network-metrics-daemon-x94k8\" (UID: \"23fa40dc-ba01-4997-bb3f-c9774637dc22\") " pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:45 crc kubenswrapper[4625]: E1202 13:45:45.536935 4625 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:45:45 crc kubenswrapper[4625]: E1202 13:45:45.537403 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs podName:23fa40dc-ba01-4997-bb3f-c9774637dc22 nodeName:}" failed. No retries permitted until 2025-12-02 13:46:49.537382717 +0000 UTC m=+165.499559792 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs") pod "network-metrics-daemon-x94k8" (UID: "23fa40dc-ba01-4997-bb3f-c9774637dc22") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 02 13:45:45 crc kubenswrapper[4625]: I1202 13:45:45.855255 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:45 crc kubenswrapper[4625]: I1202 13:45:45.855264 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:45 crc kubenswrapper[4625]: I1202 13:45:45.855459 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:45 crc kubenswrapper[4625]: E1202 13:45:45.855558 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:45 crc kubenswrapper[4625]: E1202 13:45:45.855821 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:45 crc kubenswrapper[4625]: E1202 13:45:45.856088 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:45 crc kubenswrapper[4625]: I1202 13:45:45.869886 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-94hqj" podStartSLOduration=80.869848022 podStartE2EDuration="1m20.869848022s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:44.889274837 +0000 UTC m=+100.851451982" watchObservedRunningTime="2025-12-02 13:45:45.869848022 +0000 UTC m=+101.832025097" Dec 02 13:45:45 crc kubenswrapper[4625]: I1202 13:45:45.871222 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 02 13:45:46 crc kubenswrapper[4625]: I1202 13:45:46.855871 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:46 crc kubenswrapper[4625]: E1202 13:45:46.856634 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:47 crc kubenswrapper[4625]: I1202 13:45:47.855621 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:47 crc kubenswrapper[4625]: I1202 13:45:47.855621 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:47 crc kubenswrapper[4625]: I1202 13:45:47.855711 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:47 crc kubenswrapper[4625]: E1202 13:45:47.856248 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:47 crc kubenswrapper[4625]: E1202 13:45:47.856335 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:47 crc kubenswrapper[4625]: E1202 13:45:47.856441 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:47 crc kubenswrapper[4625]: I1202 13:45:47.856527 4625 scope.go:117] "RemoveContainer" containerID="0eea36a9d142bc84b976480b3d8bad9fe3e55bdd9a0946fb688feccfa7eae861" Dec 02 13:45:47 crc kubenswrapper[4625]: E1202 13:45:47.856735 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" Dec 02 13:45:47 crc kubenswrapper[4625]: I1202 13:45:47.896747 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.896720371 podStartE2EDuration="2.896720371s" podCreationTimestamp="2025-12-02 13:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:45:47.896455624 +0000 UTC m=+103.858632779" watchObservedRunningTime="2025-12-02 13:45:47.896720371 +0000 UTC m=+103.858897486" Dec 02 13:45:48 crc kubenswrapper[4625]: I1202 13:45:48.855425 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:48 crc kubenswrapper[4625]: E1202 13:45:48.855985 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:49 crc kubenswrapper[4625]: I1202 13:45:49.855917 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:49 crc kubenswrapper[4625]: I1202 13:45:49.856005 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:49 crc kubenswrapper[4625]: E1202 13:45:49.856100 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:49 crc kubenswrapper[4625]: E1202 13:45:49.856227 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:49 crc kubenswrapper[4625]: I1202 13:45:49.857080 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:49 crc kubenswrapper[4625]: E1202 13:45:49.857412 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:50 crc kubenswrapper[4625]: I1202 13:45:50.855643 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:50 crc kubenswrapper[4625]: E1202 13:45:50.855808 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:51 crc kubenswrapper[4625]: I1202 13:45:51.855378 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:51 crc kubenswrapper[4625]: I1202 13:45:51.855384 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:51 crc kubenswrapper[4625]: E1202 13:45:51.855613 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:51 crc kubenswrapper[4625]: I1202 13:45:51.855384 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:51 crc kubenswrapper[4625]: E1202 13:45:51.855720 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:51 crc kubenswrapper[4625]: E1202 13:45:51.855773 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:52 crc kubenswrapper[4625]: I1202 13:45:52.856671 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:52 crc kubenswrapper[4625]: E1202 13:45:52.856855 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:53 crc kubenswrapper[4625]: I1202 13:45:53.855386 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:53 crc kubenswrapper[4625]: I1202 13:45:53.855446 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:53 crc kubenswrapper[4625]: E1202 13:45:53.855997 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:53 crc kubenswrapper[4625]: I1202 13:45:53.855450 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:53 crc kubenswrapper[4625]: E1202 13:45:53.855855 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:53 crc kubenswrapper[4625]: E1202 13:45:53.856182 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:54 crc kubenswrapper[4625]: I1202 13:45:54.855169 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:54 crc kubenswrapper[4625]: E1202 13:45:54.856247 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:55 crc kubenswrapper[4625]: I1202 13:45:55.855580 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:55 crc kubenswrapper[4625]: I1202 13:45:55.855706 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:55 crc kubenswrapper[4625]: I1202 13:45:55.855718 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:55 crc kubenswrapper[4625]: E1202 13:45:55.856064 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:55 crc kubenswrapper[4625]: E1202 13:45:55.856227 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:55 crc kubenswrapper[4625]: E1202 13:45:55.856414 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:56 crc kubenswrapper[4625]: I1202 13:45:56.855586 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:56 crc kubenswrapper[4625]: E1202 13:45:56.856047 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:57 crc kubenswrapper[4625]: I1202 13:45:57.855796 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:57 crc kubenswrapper[4625]: I1202 13:45:57.855857 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:57 crc kubenswrapper[4625]: E1202 13:45:57.856306 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:57 crc kubenswrapper[4625]: I1202 13:45:57.855908 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:57 crc kubenswrapper[4625]: E1202 13:45:57.856396 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:45:57 crc kubenswrapper[4625]: E1202 13:45:57.856840 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:58 crc kubenswrapper[4625]: I1202 13:45:58.855784 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:45:58 crc kubenswrapper[4625]: I1202 13:45:58.857098 4625 scope.go:117] "RemoveContainer" containerID="0eea36a9d142bc84b976480b3d8bad9fe3e55bdd9a0946fb688feccfa7eae861" Dec 02 13:45:58 crc kubenswrapper[4625]: E1202 13:45:58.857363 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" Dec 02 13:45:58 crc kubenswrapper[4625]: E1202 13:45:58.857544 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:45:59 crc kubenswrapper[4625]: I1202 13:45:59.855671 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:45:59 crc kubenswrapper[4625]: I1202 13:45:59.855699 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:45:59 crc kubenswrapper[4625]: I1202 13:45:59.855665 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:45:59 crc kubenswrapper[4625]: E1202 13:45:59.855801 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:45:59 crc kubenswrapper[4625]: E1202 13:45:59.856033 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:45:59 crc kubenswrapper[4625]: E1202 13:45:59.856181 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:46:00 crc kubenswrapper[4625]: I1202 13:46:00.855779 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:46:00 crc kubenswrapper[4625]: E1202 13:46:00.855972 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:46:01 crc kubenswrapper[4625]: I1202 13:46:01.855480 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:46:01 crc kubenswrapper[4625]: I1202 13:46:01.855557 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:46:01 crc kubenswrapper[4625]: I1202 13:46:01.855616 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:46:01 crc kubenswrapper[4625]: E1202 13:46:01.855739 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:46:01 crc kubenswrapper[4625]: E1202 13:46:01.855849 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:46:01 crc kubenswrapper[4625]: E1202 13:46:01.856024 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:46:02 crc kubenswrapper[4625]: I1202 13:46:02.855479 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:46:02 crc kubenswrapper[4625]: E1202 13:46:02.855814 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:46:03 crc kubenswrapper[4625]: I1202 13:46:03.856137 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:46:03 crc kubenswrapper[4625]: I1202 13:46:03.856199 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:46:03 crc kubenswrapper[4625]: I1202 13:46:03.856394 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:46:03 crc kubenswrapper[4625]: E1202 13:46:03.856750 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:46:03 crc kubenswrapper[4625]: E1202 13:46:03.856885 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:46:03 crc kubenswrapper[4625]: E1202 13:46:03.857241 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:46:04 crc kubenswrapper[4625]: E1202 13:46:04.828103 4625 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 02 13:46:04 crc kubenswrapper[4625]: I1202 13:46:04.855627 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:46:04 crc kubenswrapper[4625]: E1202 13:46:04.856911 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:46:05 crc kubenswrapper[4625]: E1202 13:46:05.104228 4625 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 13:46:05 crc kubenswrapper[4625]: E1202 13:46:05.619920 4625 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd11bfd3_e3e2_47ac_8354_30dd684045dc.slice/crio-conmon-507ce7f93493157eaee11509f975c22a655957ec9c0e48169d075f4eb3a301ef.scope\": RecentStats: unable to find data in memory cache]" Dec 02 13:46:05 crc kubenswrapper[4625]: I1202 13:46:05.855671 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:46:05 crc kubenswrapper[4625]: I1202 13:46:05.855714 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:46:05 crc kubenswrapper[4625]: E1202 13:46:05.855788 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:46:05 crc kubenswrapper[4625]: E1202 13:46:05.855858 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:46:05 crc kubenswrapper[4625]: I1202 13:46:05.856430 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:46:05 crc kubenswrapper[4625]: E1202 13:46:05.856500 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:46:05 crc kubenswrapper[4625]: I1202 13:46:05.934979 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lnf62_dd11bfd3-e3e2-47ac-8354-30dd684045dc/kube-multus/1.log" Dec 02 13:46:05 crc kubenswrapper[4625]: I1202 13:46:05.935770 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lnf62_dd11bfd3-e3e2-47ac-8354-30dd684045dc/kube-multus/0.log" Dec 02 13:46:05 crc kubenswrapper[4625]: I1202 13:46:05.935809 4625 generic.go:334] "Generic (PLEG): container finished" podID="dd11bfd3-e3e2-47ac-8354-30dd684045dc" containerID="507ce7f93493157eaee11509f975c22a655957ec9c0e48169d075f4eb3a301ef" exitCode=1 Dec 02 13:46:05 crc kubenswrapper[4625]: I1202 13:46:05.935836 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lnf62" event={"ID":"dd11bfd3-e3e2-47ac-8354-30dd684045dc","Type":"ContainerDied","Data":"507ce7f93493157eaee11509f975c22a655957ec9c0e48169d075f4eb3a301ef"} Dec 02 13:46:05 crc kubenswrapper[4625]: I1202 13:46:05.935868 4625 scope.go:117] "RemoveContainer" containerID="407d15787a37403331bac018394d21fa79edcdf1a7ce3ff5e9fc9362c11407b2" Dec 02 13:46:05 crc kubenswrapper[4625]: I1202 13:46:05.936295 4625 scope.go:117] "RemoveContainer" containerID="507ce7f93493157eaee11509f975c22a655957ec9c0e48169d075f4eb3a301ef" Dec 02 13:46:05 crc kubenswrapper[4625]: E1202 13:46:05.936481 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lnf62_openshift-multus(dd11bfd3-e3e2-47ac-8354-30dd684045dc)\"" pod="openshift-multus/multus-lnf62" podUID="dd11bfd3-e3e2-47ac-8354-30dd684045dc" Dec 02 13:46:06 crc kubenswrapper[4625]: I1202 13:46:06.855775 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:46:06 crc kubenswrapper[4625]: E1202 13:46:06.855912 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 02 13:46:06 crc kubenswrapper[4625]: I1202 13:46:06.855775 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8"
Dec 02 13:46:06 crc kubenswrapper[4625]: E1202 13:46:06.855912 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22"
Dec 02 13:46:06 crc kubenswrapper[4625]: I1202 13:46:06.940506 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lnf62_dd11bfd3-e3e2-47ac-8354-30dd684045dc/kube-multus/1.log"
Dec 02 13:46:07 crc kubenswrapper[4625]: I1202 13:46:07.855904 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:46:07 crc kubenswrapper[4625]: I1202 13:46:07.855939 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:46:07 crc kubenswrapper[4625]: I1202 13:46:07.855946 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:46:07 crc kubenswrapper[4625]: E1202 13:46:07.856042 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:46:07 crc kubenswrapper[4625]: E1202 13:46:07.856125 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:46:07 crc kubenswrapper[4625]: E1202 13:46:07.856264 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:46:08 crc kubenswrapper[4625]: I1202 13:46:08.855526 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8"
Dec 02 13:46:08 crc kubenswrapper[4625]: E1202 13:46:08.855832 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22"
Dec 02 13:46:09 crc kubenswrapper[4625]: I1202 13:46:09.856057 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:46:09 crc kubenswrapper[4625]: I1202 13:46:09.856066 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:46:09 crc kubenswrapper[4625]: I1202 13:46:09.856066 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:46:09 crc kubenswrapper[4625]: E1202 13:46:09.856429 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:46:09 crc kubenswrapper[4625]: E1202 13:46:09.856559 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:46:09 crc kubenswrapper[4625]: E1202 13:46:09.856641 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:46:09 crc kubenswrapper[4625]: I1202 13:46:09.856734 4625 scope.go:117] "RemoveContainer" containerID="0eea36a9d142bc84b976480b3d8bad9fe3e55bdd9a0946fb688feccfa7eae861"
Dec 02 13:46:09 crc kubenswrapper[4625]: E1202 13:46:09.856905 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lslqf_openshift-ovn-kubernetes(df437b8d-61b5-41ea-8f56-d5472e444b23)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23"
Dec 02 13:46:10 crc kubenswrapper[4625]: E1202 13:46:10.105885 4625 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 02 13:46:10 crc kubenswrapper[4625]: I1202 13:46:10.855920 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8"
Dec 02 13:46:10 crc kubenswrapper[4625]: E1202 13:46:10.856181 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22"
Dec 02 13:46:11 crc kubenswrapper[4625]: I1202 13:46:11.855356 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:46:11 crc kubenswrapper[4625]: I1202 13:46:11.855391 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:46:11 crc kubenswrapper[4625]: I1202 13:46:11.855356 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:46:11 crc kubenswrapper[4625]: E1202 13:46:11.855503 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:46:11 crc kubenswrapper[4625]: E1202 13:46:11.855599 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:46:11 crc kubenswrapper[4625]: E1202 13:46:11.855701 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:46:12 crc kubenswrapper[4625]: I1202 13:46:12.855305 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8"
Dec 02 13:46:12 crc kubenswrapper[4625]: E1202 13:46:12.855488 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22"
Dec 02 13:46:13 crc kubenswrapper[4625]: I1202 13:46:13.856036 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:46:13 crc kubenswrapper[4625]: I1202 13:46:13.856095 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:46:13 crc kubenswrapper[4625]: I1202 13:46:13.856160 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:46:13 crc kubenswrapper[4625]: E1202 13:46:13.856188 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:46:13 crc kubenswrapper[4625]: E1202 13:46:13.856384 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:46:13 crc kubenswrapper[4625]: E1202 13:46:13.856431 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:46:14 crc kubenswrapper[4625]: I1202 13:46:14.855576 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8"
Dec 02 13:46:14 crc kubenswrapper[4625]: E1202 13:46:14.856671 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22"
Dec 02 13:46:15 crc kubenswrapper[4625]: E1202 13:46:15.107634 4625 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 02 13:46:15 crc kubenswrapper[4625]: I1202 13:46:15.855943 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:46:15 crc kubenswrapper[4625]: I1202 13:46:15.855999 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:46:15 crc kubenswrapper[4625]: I1202 13:46:15.856075 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:46:15 crc kubenswrapper[4625]: E1202 13:46:15.856106 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:46:15 crc kubenswrapper[4625]: E1202 13:46:15.856403 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:46:15 crc kubenswrapper[4625]: E1202 13:46:15.856279 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:46:16 crc kubenswrapper[4625]: I1202 13:46:16.856102 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8"
Dec 02 13:46:16 crc kubenswrapper[4625]: E1202 13:46:16.856587 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22"
Dec 02 13:46:17 crc kubenswrapper[4625]: I1202 13:46:17.856154 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:46:17 crc kubenswrapper[4625]: I1202 13:46:17.856237 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:46:17 crc kubenswrapper[4625]: E1202 13:46:17.856298 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:46:17 crc kubenswrapper[4625]: I1202 13:46:17.856237 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:46:17 crc kubenswrapper[4625]: E1202 13:46:17.856502 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:46:17 crc kubenswrapper[4625]: E1202 13:46:17.856626 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:46:18 crc kubenswrapper[4625]: I1202 13:46:18.858040 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8"
Dec 02 13:46:18 crc kubenswrapper[4625]: E1202 13:46:18.858650 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22"
Dec 02 13:46:19 crc kubenswrapper[4625]: I1202 13:46:19.855433 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:46:19 crc kubenswrapper[4625]: E1202 13:46:19.855599 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:46:19 crc kubenswrapper[4625]: I1202 13:46:19.855774 4625 scope.go:117] "RemoveContainer" containerID="507ce7f93493157eaee11509f975c22a655957ec9c0e48169d075f4eb3a301ef"
Dec 02 13:46:19 crc kubenswrapper[4625]: I1202 13:46:19.855899 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:46:19 crc kubenswrapper[4625]: E1202 13:46:19.856131 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:46:19 crc kubenswrapper[4625]: I1202 13:46:19.855298 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:46:19 crc kubenswrapper[4625]: E1202 13:46:19.856702 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 02 13:46:19 crc kubenswrapper[4625]: I1202 13:46:19.984286 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lnf62_dd11bfd3-e3e2-47ac-8354-30dd684045dc/kube-multus/1.log"
Dec 02 13:46:19 crc kubenswrapper[4625]: I1202 13:46:19.984364 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lnf62" event={"ID":"dd11bfd3-e3e2-47ac-8354-30dd684045dc","Type":"ContainerStarted","Data":"63837bcbbf75cee360705ae64aca4a3b57f1b70420077e4997b6cce891c61050"}
Dec 02 13:46:20 crc kubenswrapper[4625]: E1202 13:46:20.109370 4625 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 02 13:46:20 crc kubenswrapper[4625]: I1202 13:46:20.855565 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8"
Dec 02 13:46:20 crc kubenswrapper[4625]: E1202 13:46:20.855724 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22"
Dec 02 13:46:21 crc kubenswrapper[4625]: I1202 13:46:21.855983 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:46:21 crc kubenswrapper[4625]: E1202 13:46:21.856125 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 02 13:46:21 crc kubenswrapper[4625]: I1202 13:46:21.856006 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:46:21 crc kubenswrapper[4625]: E1202 13:46:21.856207 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 02 13:46:21 crc kubenswrapper[4625]: I1202 13:46:21.855983 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:46:21 crc kubenswrapper[4625]: E1202 13:46:21.856571 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:46:22 crc kubenswrapper[4625]: I1202 13:46:22.855766 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:46:22 crc kubenswrapper[4625]: E1202 13:46:22.855936 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:46:23 crc kubenswrapper[4625]: I1202 13:46:23.856063 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:46:23 crc kubenswrapper[4625]: I1202 13:46:23.856116 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:46:23 crc kubenswrapper[4625]: E1202 13:46:23.856241 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:46:23 crc kubenswrapper[4625]: E1202 13:46:23.856680 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:46:23 crc kubenswrapper[4625]: I1202 13:46:23.856851 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:46:23 crc kubenswrapper[4625]: I1202 13:46:23.857088 4625 scope.go:117] "RemoveContainer" containerID="0eea36a9d142bc84b976480b3d8bad9fe3e55bdd9a0946fb688feccfa7eae861" Dec 02 13:46:23 crc kubenswrapper[4625]: E1202 13:46:23.857053 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:46:24 crc kubenswrapper[4625]: I1202 13:46:24.001407 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovnkube-controller/3.log" Dec 02 13:46:24 crc kubenswrapper[4625]: I1202 13:46:24.004831 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerStarted","Data":"3dde4af5126141af55c57d1dcd42a8a0e5dbbabeec143623d2b05abe1c27097c"} Dec 02 13:46:24 crc kubenswrapper[4625]: I1202 13:46:24.005210 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:46:24 crc kubenswrapper[4625]: I1202 13:46:24.042290 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podStartSLOduration=118.042264851 podStartE2EDuration="1m58.042264851s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:24.039465565 +0000 UTC m=+140.001642690" watchObservedRunningTime="2025-12-02 13:46:24.042264851 +0000 UTC m=+140.004441946" Dec 02 13:46:24 crc kubenswrapper[4625]: I1202 13:46:24.841537 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-x94k8"] Dec 02 13:46:24 crc kubenswrapper[4625]: I1202 13:46:24.841761 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:46:24 crc kubenswrapper[4625]: E1202 13:46:24.841906 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:46:25 crc kubenswrapper[4625]: E1202 13:46:25.111848 4625 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 02 13:46:25 crc kubenswrapper[4625]: I1202 13:46:25.855824 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:46:25 crc kubenswrapper[4625]: I1202 13:46:25.855920 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:46:25 crc kubenswrapper[4625]: E1202 13:46:25.856501 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:46:25 crc kubenswrapper[4625]: I1202 13:46:25.855961 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:46:25 crc kubenswrapper[4625]: E1202 13:46:25.856650 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:46:25 crc kubenswrapper[4625]: E1202 13:46:25.857070 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:46:26 crc kubenswrapper[4625]: I1202 13:46:26.856038 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:46:26 crc kubenswrapper[4625]: E1202 13:46:26.856255 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:46:27 crc kubenswrapper[4625]: I1202 13:46:27.855440 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:46:27 crc kubenswrapper[4625]: I1202 13:46:27.855517 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:46:27 crc kubenswrapper[4625]: I1202 13:46:27.855450 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:46:27 crc kubenswrapper[4625]: E1202 13:46:27.855649 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:46:27 crc kubenswrapper[4625]: E1202 13:46:27.855779 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:46:27 crc kubenswrapper[4625]: E1202 13:46:27.855898 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:46:28 crc kubenswrapper[4625]: I1202 13:46:28.855805 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:46:28 crc kubenswrapper[4625]: E1202 13:46:28.856018 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-x94k8" podUID="23fa40dc-ba01-4997-bb3f-c9774637dc22" Dec 02 13:46:29 crc kubenswrapper[4625]: I1202 13:46:29.855783 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:46:29 crc kubenswrapper[4625]: I1202 13:46:29.855865 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:46:29 crc kubenswrapper[4625]: I1202 13:46:29.855863 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:46:29 crc kubenswrapper[4625]: E1202 13:46:29.856006 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 02 13:46:29 crc kubenswrapper[4625]: E1202 13:46:29.856132 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 02 13:46:29 crc kubenswrapper[4625]: E1202 13:46:29.856363 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 02 13:46:30 crc kubenswrapper[4625]: I1202 13:46:30.856579 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:46:30 crc kubenswrapper[4625]: I1202 13:46:30.859941 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 02 13:46:30 crc kubenswrapper[4625]: I1202 13:46:30.861376 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 02 13:46:31 crc kubenswrapper[4625]: I1202 13:46:31.855200 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:46:31 crc kubenswrapper[4625]: I1202 13:46:31.855200 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:46:31 crc kubenswrapper[4625]: I1202 13:46:31.855206 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:46:31 crc kubenswrapper[4625]: I1202 13:46:31.857930 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 02 13:46:31 crc kubenswrapper[4625]: I1202 13:46:31.858085 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 02 13:46:31 crc kubenswrapper[4625]: I1202 13:46:31.859012 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 02 13:46:31 crc kubenswrapper[4625]: I1202 13:46:31.860001 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.833276 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:33 crc kubenswrapper[4625]: E1202 13:46:33.833452 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:48:35.833418234 +0000 UTC m=+271.795595319 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.833511 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.833640 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.833685 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.834965 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.850075 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.851657 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.934916 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.938979 4625 
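
Note: the nestedpendingoperations entry above shows the volume manager's per-operation exponential back-off: the unmount fails because the kubevirt.io.hostpath-provisioner CSI driver has not (re)registered, and each failure pushes the next retry further out until it reaches the cap, here the logged durationBeforeRetry of 2m2s (hence "No retries permitted until 13:48:35"). A sketch of such a schedule; the 500ms initial delay and doubling factor are assumptions, only the 2m2s cap is taken from the log:

    // Sketch of exponential back-off for a repeatedly failing volume
    // operation; initial delay and factor are assumed, the cap matches
    // the logged "durationBeforeRetry 2m2s".
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        maxDelay := 2*time.Minute + 2*time.Second // the 2m2s cap from the log
        delay := 500 * time.Millisecond           // assumed initial delay
        for attempt := 1; attempt <= 10; attempt++ {
            fmt.Printf("attempt %d failed, next retry in %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay // further failures stay at the cap
            }
        }
    }
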
Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.833511 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.833640 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.833685 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.834965 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.850075 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.851657 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.934916 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.938979 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.969162 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.976081 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 02 13:46:33 crc kubenswrapper[4625]: I1202 13:46:33.986621 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 02 13:46:34 crc kubenswrapper[4625]: W1202 13:46:34.563190 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-b7843e34740aa3e9cf6b54be27751b1ecacd673d46a3b4aa314b3be644f4fb5a WatchSource:0}: Error finding container b7843e34740aa3e9cf6b54be27751b1ecacd673d46a3b4aa314b3be644f4fb5a: Status 404 returned error can't find the container with id b7843e34740aa3e9cf6b54be27751b1ecacd673d46a3b4aa314b3be644f4fb5a
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.061297 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d5769cdb7bfdbce106047e8f104ef68a030f1a836cab638fe13e8758247cab8a"}
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.063558 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f4cbe84e800b4d50f351af7d589ec1e0050ba4652b757ec29d6b7fbbdb3a9b71"}
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.064579 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b7843e34740aa3e9cf6b54be27751b1ecacd673d46a3b4aa314b3be644f4fb5a"}
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.265445 4625 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
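
Note: the NodeReady event above is the turning point of this section: once ovnkube-node's controller started (13:46:24) and the runtime began reporting NetworkReady=true, kubelet recorded the node Ready (compare "Node not becoming ready in time after startup" at 13:46:04), and the burst of SyncLoop ADD entries that follows is the scheduler placing the pods that had been waiting on it. A toy model of that dependency, not kubelet's actual code:

    // Toy sketch: the node's Ready condition tracks the runtime's
    // NetworkReady status (illustrative, not kubelet's implementation).
    package main

    import "fmt"

    type runtimeStatus struct{ NetworkReady bool }

    func nodeCondition(s runtimeStatus) string {
        if !s.NetworkReady {
            return "NotReady: container runtime network not ready"
        }
        return "Ready"
    }

    func main() {
        fmt.Println(nodeCondition(runtimeStatus{NetworkReady: false})) // before 13:46:35
        fmt.Println(nodeCondition(runtimeStatus{NetworkReady: true}))  // after the CNI config appears
    }
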
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.345241 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p4l8q"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.345744 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.346056 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.346226 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.350711 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.351158 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.397274 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-client-ca\") pod \"controller-manager-879f6c89f-rxs7k\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.397341 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-config\") pod \"controller-manager-879f6c89f-rxs7k\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.397365 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-config\") pod \"route-controller-manager-6576b87f9c-ctx7m\" (UID: \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.397396 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0c94cff-bbf9-4818-925f-5d46df464248-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dhfc6\" (UID: \"a0c94cff-bbf9-4818-925f-5d46df464248\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.397420 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/30f3fae9-f4d5-4f32-9498-5d2a2d801654-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p4l8q\" (UID: \"30f3fae9-f4d5-4f32-9498-5d2a2d801654\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q" Dec 02 13:46:35 crc 
kubenswrapper[4625]: I1202 13:46:35.397443 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59r8c\" (UniqueName: \"kubernetes.io/projected/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-kube-api-access-59r8c\") pod \"route-controller-manager-6576b87f9c-ctx7m\" (UID: \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.397467 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rxs7k\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.397488 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtdsd\" (UniqueName: \"kubernetes.io/projected/875633b5-d52b-4d18-9322-dfbe2d73aed4-kube-api-access-xtdsd\") pod \"controller-manager-879f6c89f-rxs7k\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.397509 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/30f3fae9-f4d5-4f32-9498-5d2a2d801654-images\") pod \"machine-api-operator-5694c8668f-p4l8q\" (UID: \"30f3fae9-f4d5-4f32-9498-5d2a2d801654\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.397530 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-serving-cert\") pod \"route-controller-manager-6576b87f9c-ctx7m\" (UID: \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.397593 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/875633b5-d52b-4d18-9322-dfbe2d73aed4-serving-cert\") pod \"controller-manager-879f6c89f-rxs7k\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.397664 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30f3fae9-f4d5-4f32-9498-5d2a2d801654-config\") pod \"machine-api-operator-5694c8668f-p4l8q\" (UID: \"30f3fae9-f4d5-4f32-9498-5d2a2d801654\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.397680 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-client-ca\") pod \"route-controller-manager-6576b87f9c-ctx7m\" (UID: \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" Dec 02 13:46:35 crc 
kubenswrapper[4625]: I1202 13:46:35.397698 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hft9\" (UniqueName: \"kubernetes.io/projected/a0c94cff-bbf9-4818-925f-5d46df464248-kube-api-access-6hft9\") pod \"openshift-apiserver-operator-796bbdcf4f-dhfc6\" (UID: \"a0c94cff-bbf9-4818-925f-5d46df464248\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.397727 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpknl\" (UniqueName: \"kubernetes.io/projected/30f3fae9-f4d5-4f32-9498-5d2a2d801654-kube-api-access-wpknl\") pod \"machine-api-operator-5694c8668f-p4l8q\" (UID: \"30f3fae9-f4d5-4f32-9498-5d2a2d801654\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.397742 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0c94cff-bbf9-4818-925f-5d46df464248-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dhfc6\" (UID: \"a0c94cff-bbf9-4818-925f-5d46df464248\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.405249 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-5sq66"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.405844 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5sq66" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.417425 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.417927 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.419577 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.419799 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.419965 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.419995 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.420292 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.420365 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.420502 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.420519 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.420681 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.420807 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.420929 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.421043 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.421217 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.421337 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.421463 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.421525 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.421683 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.421471 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 02 13:46:35 crc 
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.421861 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.421924 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.422079 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.422206 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.422354 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.422478 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.422919 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.428617 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.429227 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vz5q2"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.429421 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.429641 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.431744 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.432429 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.432920 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-h2nf9"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.433357 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-h2nf9"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.434417 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dsfpw"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.443542 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.444975 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.445509 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dsfpw"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.456274 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tjbfd"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.456949 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x8tnt"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.457288 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bnk2h"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.457843 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bnk2h"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.458290 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tjbfd"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.458441 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-pr728"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.458645 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.459110 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-pr728"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.460813 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.461227 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.461675 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.465858 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.466322 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.466555 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll"
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.473326 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.483594 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9xkt2"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.484097 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sc4p7"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.484463 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.484990 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.485605 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9xkt2" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.485845 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.487874 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.488010 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.488095 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.488295 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.488351 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.488487 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.488512 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.488611 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.488701 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.488816 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.488954 4625 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.488995 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.489105 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.489253 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.489590 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.489702 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.489802 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.489932 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.490034 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.490067 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.490132 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.490240 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.490306 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.488826 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.495982 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rxs7k"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.496926 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.497128 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r4vx5"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.497166 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.497266 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.497411 4625 reflector.go:368] Caches 
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.497544 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.497661 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498163 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30f3fae9-f4d5-4f32-9498-5d2a2d801654-config\") pod \"machine-api-operator-5694c8668f-p4l8q\" (UID: \"30f3fae9-f4d5-4f32-9498-5d2a2d801654\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498177 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498191 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-client-ca\") pod \"route-controller-manager-6576b87f9c-ctx7m\" (UID: \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498212 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hft9\" (UniqueName: \"kubernetes.io/projected/a0c94cff-bbf9-4818-925f-5d46df464248-kube-api-access-6hft9\") pod \"openshift-apiserver-operator-796bbdcf4f-dhfc6\" (UID: \"a0c94cff-bbf9-4818-925f-5d46df464248\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498237 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpknl\" (UniqueName: \"kubernetes.io/projected/30f3fae9-f4d5-4f32-9498-5d2a2d801654-kube-api-access-wpknl\") pod \"machine-api-operator-5694c8668f-p4l8q\" (UID: \"30f3fae9-f4d5-4f32-9498-5d2a2d801654\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498252 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0c94cff-bbf9-4818-925f-5d46df464248-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dhfc6\" (UID: \"a0c94cff-bbf9-4818-925f-5d46df464248\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498269 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-client-ca\") pod \"controller-manager-879f6c89f-rxs7k\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498284 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-config\") pod \"controller-manager-879f6c89f-rxs7k\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498297 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-config\") pod \"route-controller-manager-6576b87f9c-ctx7m\" (UID: \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498328 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0c94cff-bbf9-4818-925f-5d46df464248-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dhfc6\" (UID: \"a0c94cff-bbf9-4818-925f-5d46df464248\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498346 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/30f3fae9-f4d5-4f32-9498-5d2a2d801654-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p4l8q\" (UID: \"30f3fae9-f4d5-4f32-9498-5d2a2d801654\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498362 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59r8c\" (UniqueName: \"kubernetes.io/projected/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-kube-api-access-59r8c\") pod \"route-controller-manager-6576b87f9c-ctx7m\" (UID: \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498377 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rxs7k\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498392 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtdsd\" (UniqueName: \"kubernetes.io/projected/875633b5-d52b-4d18-9322-dfbe2d73aed4-kube-api-access-xtdsd\") pod \"controller-manager-879f6c89f-rxs7k\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498407 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/30f3fae9-f4d5-4f32-9498-5d2a2d801654-images\") pod \"machine-api-operator-5694c8668f-p4l8q\" (UID: \"30f3fae9-f4d5-4f32-9498-5d2a2d801654\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498421 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-serving-cert\") pod \"route-controller-manager-6576b87f9c-ctx7m\" (UID: \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m"
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498439 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/875633b5-d52b-4d18-9322-dfbe2d73aed4-serving-cert\") pod \"controller-manager-879f6c89f-rxs7k\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498551 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498652 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498790 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.498883 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.499741 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4vx5" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.500789 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-config\") pod \"controller-manager-879f6c89f-rxs7k\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.501448 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0c94cff-bbf9-4818-925f-5d46df464248-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dhfc6\" (UID: \"a0c94cff-bbf9-4818-925f-5d46df464248\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.501719 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.502065 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.502633 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.502891 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.503011 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30f3fae9-f4d5-4f32-9498-5d2a2d801654-config\") pod \"machine-api-operator-5694c8668f-p4l8q\" (UID: \"30f3fae9-f4d5-4f32-9498-5d2a2d801654\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.504276 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-config\") pod \"route-controller-manager-6576b87f9c-ctx7m\" (UID: \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.506337 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-client-ca\") pod \"route-controller-manager-6576b87f9c-ctx7m\" (UID: \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.507794 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.502071 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-client-ca\") pod \"controller-manager-879f6c89f-rxs7k\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.511033 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rxs7k\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.511572 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.512065 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/30f3fae9-f4d5-4f32-9498-5d2a2d801654-images\") pod \"machine-api-operator-5694c8668f-p4l8q\" (UID: \"30f3fae9-f4d5-4f32-9498-5d2a2d801654\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.514415 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.516336 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/875633b5-d52b-4d18-9322-dfbe2d73aed4-serving-cert\") pod \"controller-manager-879f6c89f-rxs7k\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.520090 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-serving-cert\") pod \"route-controller-manager-6576b87f9c-ctx7m\" (UID: \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.520549 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0c94cff-bbf9-4818-925f-5d46df464248-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-dhfc6\" (UID: \"a0c94cff-bbf9-4818-925f-5d46df464248\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.524347 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.524534 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.524647 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.524694 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.524767 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.524968 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.525099 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.525200 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.525505 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.525962 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.526285 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.526414 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9zdhl"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.526422 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.526540 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.526668 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.526724 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p4l8q"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.526785 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.526829 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.526927 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.527032 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.527115 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.527165 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.527293 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.527336 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.527371 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.551863 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/30f3fae9-f4d5-4f32-9498-5d2a2d801654-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p4l8q\" (UID: \"30f3fae9-f4d5-4f32-9498-5d2a2d801654\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.527778 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.527825 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.527861 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.527894 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.527955 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.527995 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.528031 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.528064 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.528099 4625 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.528162 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.528205 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.528243 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.528291 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.601118 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.603985 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.604589 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605231 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/523dde62-fffc-455c-986c-9b69306a6225-config\") pod \"kube-apiserver-operator-766d6c64bb-9xkt2\" (UID: \"523dde62-fffc-455c-986c-9b69306a6225\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9xkt2" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605260 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b76c3489-3b8c-4e02-b757-00e290f24fc9-service-ca-bundle\") pod \"authentication-operator-69f744f599-vz5q2\" (UID: \"b76c3489-3b8c-4e02-b757-00e290f24fc9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605283 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-node-pullsecrets\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605299 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmr2h\" (UniqueName: \"kubernetes.io/projected/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-kube-api-access-vmr2h\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605331 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b76c3489-3b8c-4e02-b757-00e290f24fc9-serving-cert\") pod \"authentication-operator-69f744f599-vz5q2\" (UID: \"b76c3489-3b8c-4e02-b757-00e290f24fc9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2" Dec 02 
13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605351 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605383 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56474967-807c-4a3e-8037-45dfb0b88fe2-audit-dir\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605401 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9da966a1-56e1-4805-b03d-97b9d2ca467a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hmnvh\" (UID: \"9da966a1-56e1-4805-b03d-97b9d2ca467a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605417 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/935b4d1d-2bd9-4594-a23b-d823402ac019-serving-cert\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605435 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48f0e27b-d4d8-4118-98af-e6fa04663c27-serving-cert\") pod \"console-operator-58897d9998-h2nf9\" (UID: \"48f0e27b-d4d8-4118-98af-e6fa04663c27\") " pod="openshift-console-operator/console-operator-58897d9998-h2nf9" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605451 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9pcv\" (UniqueName: \"kubernetes.io/projected/b76c3489-3b8c-4e02-b757-00e290f24fc9-kube-api-access-n9pcv\") pod \"authentication-operator-69f744f599-vz5q2\" (UID: \"b76c3489-3b8c-4e02-b757-00e290f24fc9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605470 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-etcd-client\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605488 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc 
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605520 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/244411d4-7a54-4ce6-9eb8-cbaa12838fc0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wxnpz\" (UID: \"244411d4-7a54-4ce6-9eb8-cbaa12838fc0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605542 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b76c3489-3b8c-4e02-b757-00e290f24fc9-config\") pod \"authentication-operator-69f744f599-vz5q2\" (UID: \"b76c3489-3b8c-4e02-b757-00e290f24fc9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605557 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/935b4d1d-2bd9-4594-a23b-d823402ac019-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605577 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dndjs\" (UniqueName: \"kubernetes.io/projected/935b4d1d-2bd9-4594-a23b-d823402ac019-kube-api-access-dndjs\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605594 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b56821-f503-4316-b494-6c53ea6037b4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2wn5\" (UID: \"b4b56821-f503-4316-b494-6c53ea6037b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605611 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605630 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605645 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-etcd-serving-ca\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605663 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5w9r\" (UniqueName: \"kubernetes.io/projected/78ff7018-4a24-46de-b71b-6a4fb8b8b8ee-kube-api-access-s5w9r\") pod \"migrator-59844c95c7-r4vx5\" (UID: \"78ff7018-4a24-46de-b71b-6a4fb8b8b8ee\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4vx5"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605681 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605699 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b76c3489-3b8c-4e02-b757-00e290f24fc9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vz5q2\" (UID: \"b76c3489-3b8c-4e02-b757-00e290f24fc9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605717 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605735 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmmf4\" (UniqueName: \"kubernetes.io/projected/56474967-807c-4a3e-8037-45dfb0b88fe2-kube-api-access-wmmf4\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605751 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/523dde62-fffc-455c-986c-9b69306a6225-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9xkt2\" (UID: \"523dde62-fffc-455c-986c-9b69306a6225\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9xkt2"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605767 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-audit\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd"
\"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605787 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/935b4d1d-2bd9-4594-a23b-d823402ac019-audit-policies\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605803 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-287d8\" (UniqueName: \"kubernetes.io/projected/9da966a1-56e1-4805-b03d-97b9d2ca467a-kube-api-access-287d8\") pod \"openshift-config-operator-7777fb866f-hmnvh\" (UID: \"9da966a1-56e1-4805-b03d-97b9d2ca467a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605822 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9da966a1-56e1-4805-b03d-97b9d2ca467a-serving-cert\") pod \"openshift-config-operator-7777fb866f-hmnvh\" (UID: \"9da966a1-56e1-4805-b03d-97b9d2ca467a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605838 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605854 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/935b4d1d-2bd9-4594-a23b-d823402ac019-etcd-client\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605869 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/935b4d1d-2bd9-4594-a23b-d823402ac019-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605892 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqz4r\" (UniqueName: \"kubernetes.io/projected/b4b56821-f503-4316-b494-6c53ea6037b4-kube-api-access-tqz4r\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2wn5\" (UID: \"b4b56821-f503-4316-b494-6c53ea6037b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605907 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-audit-policies\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605923 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-config\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605950 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605966 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-encryption-config\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.605987 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-image-import-ca\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606002 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-serving-cert\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606020 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606038 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjj4k\" (UniqueName: \"kubernetes.io/projected/244411d4-7a54-4ce6-9eb8-cbaa12838fc0-kube-api-access-fjj4k\") pod \"cluster-image-registry-operator-dc59b4c8b-wxnpz\" (UID: \"244411d4-7a54-4ce6-9eb8-cbaa12838fc0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606055 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mx2c\" (UniqueName: \"kubernetes.io/projected/637baf2f-239a-405a-8cde-a46bf3f7877d-kube-api-access-2mx2c\") pod \"downloads-7954f5f757-5sq66\" (UID: \"637baf2f-239a-405a-8cde-a46bf3f7877d\") " pod="openshift-console/downloads-7954f5f757-5sq66" Dec 02 
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606072 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/935b4d1d-2bd9-4594-a23b-d823402ac019-encryption-config\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606089 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/935b4d1d-2bd9-4594-a23b-d823402ac019-audit-dir\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606111 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmzjd\" (UniqueName: \"kubernetes.io/projected/f086744a-9c7e-46bc-b05a-cce4599e47aa-kube-api-access-kmzjd\") pod \"dns-operator-744455d44c-dsfpw\" (UID: \"f086744a-9c7e-46bc-b05a-cce4599e47aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsfpw"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606135 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4b56821-f503-4316-b494-6c53ea6037b4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2wn5\" (UID: \"b4b56821-f503-4316-b494-6c53ea6037b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606152 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-562wn\" (UniqueName: \"kubernetes.io/projected/48f0e27b-d4d8-4118-98af-e6fa04663c27-kube-api-access-562wn\") pod \"console-operator-58897d9998-h2nf9\" (UID: \"48f0e27b-d4d8-4118-98af-e6fa04663c27\") " pod="openshift-console-operator/console-operator-58897d9998-h2nf9"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606169 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606186 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606202 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606218 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-audit-dir\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606247 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48f0e27b-d4d8-4118-98af-e6fa04663c27-trusted-ca\") pod \"console-operator-58897d9998-h2nf9\" (UID: \"48f0e27b-d4d8-4118-98af-e6fa04663c27\") " pod="openshift-console-operator/console-operator-58897d9998-h2nf9"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606266 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/244411d4-7a54-4ce6-9eb8-cbaa12838fc0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wxnpz\" (UID: \"244411d4-7a54-4ce6-9eb8-cbaa12838fc0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606284 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f086744a-9c7e-46bc-b05a-cce4599e47aa-metrics-tls\") pod \"dns-operator-744455d44c-dsfpw\" (UID: \"f086744a-9c7e-46bc-b05a-cce4599e47aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsfpw"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606305 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/244411d4-7a54-4ce6-9eb8-cbaa12838fc0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wxnpz\" (UID: \"244411d4-7a54-4ce6-9eb8-cbaa12838fc0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606334 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f0e27b-d4d8-4118-98af-e6fa04663c27-config\") pod \"console-operator-58897d9998-h2nf9\" (UID: \"48f0e27b-d4d8-4118-98af-e6fa04663c27\") " pod="openshift-console-operator/console-operator-58897d9998-h2nf9"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.606710 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-pqzl9"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.607371 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sjskb"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.607944 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.608058 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.608171 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pqzl9"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.609107 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sjskb"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.612552 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.612676 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.612893 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.615589 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.616658 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.618901 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.619689 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.622884 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.624294 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.624776 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.633457 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.633731 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.634208 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.634654 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.634740 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.638086 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.647397 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-h2nf9"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.647463 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.653028 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5sq66"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.654848 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.661438 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bfk9k"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.662887 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bfk9k" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.663014 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hm5k5"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.663878 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.663904 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.664834 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.669421 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-66bnq"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.672791 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-66bnq" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.681012 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.683426 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8m8hp"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.684606 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.684764 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8m8hp" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.685103 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.709229 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.709641 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.713589 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.714280 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/935b4d1d-2bd9-4594-a23b-d823402ac019-etcd-client\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.714421 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/935b4d1d-2bd9-4594-a23b-d823402ac019-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.714601 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqz4r\" (UniqueName: \"kubernetes.io/projected/b4b56821-f503-4316-b494-6c53ea6037b4-kube-api-access-tqz4r\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2wn5\" (UID: \"b4b56821-f503-4316-b494-6c53ea6037b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.714818 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-audit-policies\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.715003 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-config\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.715890 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/935b4d1d-2bd9-4594-a23b-d823402ac019-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.718026 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-audit-policies\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.718034 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.718216 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-encryption-config\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.718334 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-image-import-ca\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.718438 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-serving-cert\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.718524 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.718602 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/935b4d1d-2bd9-4594-a23b-d823402ac019-encryption-config\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.718696 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjj4k\" (UniqueName: \"kubernetes.io/projected/244411d4-7a54-4ce6-9eb8-cbaa12838fc0-kube-api-access-fjj4k\") pod \"cluster-image-registry-operator-dc59b4c8b-wxnpz\" (UID: \"244411d4-7a54-4ce6-9eb8-cbaa12838fc0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.718778 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mx2c\" (UniqueName: \"kubernetes.io/projected/637baf2f-239a-405a-8cde-a46bf3f7877d-kube-api-access-2mx2c\") pod \"downloads-7954f5f757-5sq66\" (UID: \"637baf2f-239a-405a-8cde-a46bf3f7877d\") " pod="openshift-console/downloads-7954f5f757-5sq66" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.718872 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/935b4d1d-2bd9-4594-a23b-d823402ac019-audit-dir\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.718952 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmzjd\" (UniqueName: \"kubernetes.io/projected/f086744a-9c7e-46bc-b05a-cce4599e47aa-kube-api-access-kmzjd\") pod \"dns-operator-744455d44c-dsfpw\" (UID: \"f086744a-9c7e-46bc-b05a-cce4599e47aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsfpw" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.719032 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4b56821-f503-4316-b494-6c53ea6037b4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2wn5\" (UID: \"b4b56821-f503-4316-b494-6c53ea6037b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.719103 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-562wn\" (UniqueName: \"kubernetes.io/projected/48f0e27b-d4d8-4118-98af-e6fa04663c27-kube-api-access-562wn\") pod \"console-operator-58897d9998-h2nf9\" (UID: \"48f0e27b-d4d8-4118-98af-e6fa04663c27\") " pod="openshift-console-operator/console-operator-58897d9998-h2nf9" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.719178 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.719277 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.719402 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.719482 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-audit-dir\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.719586 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/244411d4-7a54-4ce6-9eb8-cbaa12838fc0-image-registry-operator-tls\") 
pod \"cluster-image-registry-operator-dc59b4c8b-wxnpz\" (UID: \"244411d4-7a54-4ce6-9eb8-cbaa12838fc0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.719687 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48f0e27b-d4d8-4118-98af-e6fa04663c27-trusted-ca\") pod \"console-operator-58897d9998-h2nf9\" (UID: \"48f0e27b-d4d8-4118-98af-e6fa04663c27\") " pod="openshift-console-operator/console-operator-58897d9998-h2nf9" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.719779 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f086744a-9c7e-46bc-b05a-cce4599e47aa-metrics-tls\") pod \"dns-operator-744455d44c-dsfpw\" (UID: \"f086744a-9c7e-46bc-b05a-cce4599e47aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsfpw" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.719860 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/244411d4-7a54-4ce6-9eb8-cbaa12838fc0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wxnpz\" (UID: \"244411d4-7a54-4ce6-9eb8-cbaa12838fc0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.719961 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f0e27b-d4d8-4118-98af-e6fa04663c27-config\") pod \"console-operator-58897d9998-h2nf9\" (UID: \"48f0e27b-d4d8-4118-98af-e6fa04663c27\") " pod="openshift-console-operator/console-operator-58897d9998-h2nf9" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.720177 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b76c3489-3b8c-4e02-b757-00e290f24fc9-service-ca-bundle\") pod \"authentication-operator-69f744f599-vz5q2\" (UID: \"b76c3489-3b8c-4e02-b757-00e290f24fc9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.720258 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-node-pullsecrets\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.720423 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmr2h\" (UniqueName: \"kubernetes.io/projected/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-kube-api-access-vmr2h\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.720537 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/523dde62-fffc-455c-986c-9b69306a6225-config\") pod \"kube-apiserver-operator-766d6c64bb-9xkt2\" (UID: \"523dde62-fffc-455c-986c-9b69306a6225\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9xkt2" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 
13:46:35.720636 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.720730 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b76c3489-3b8c-4e02-b757-00e290f24fc9-serving-cert\") pod \"authentication-operator-69f744f599-vz5q2\" (UID: \"b76c3489-3b8c-4e02-b757-00e290f24fc9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.720823 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56474967-807c-4a3e-8037-45dfb0b88fe2-audit-dir\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.720917 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9da966a1-56e1-4805-b03d-97b9d2ca467a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hmnvh\" (UID: \"9da966a1-56e1-4805-b03d-97b9d2ca467a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.721024 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48f0e27b-d4d8-4118-98af-e6fa04663c27-serving-cert\") pod \"console-operator-58897d9998-h2nf9\" (UID: \"48f0e27b-d4d8-4118-98af-e6fa04663c27\") " pod="openshift-console-operator/console-operator-58897d9998-h2nf9" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.721104 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9pcv\" (UniqueName: \"kubernetes.io/projected/b76c3489-3b8c-4e02-b757-00e290f24fc9-kube-api-access-n9pcv\") pod \"authentication-operator-69f744f599-vz5q2\" (UID: \"b76c3489-3b8c-4e02-b757-00e290f24fc9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.721177 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-etcd-client\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.721249 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/935b4d1d-2bd9-4594-a23b-d823402ac019-serving-cert\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.721352 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.721442 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/523dde62-fffc-455c-986c-9b69306a6225-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9xkt2\" (UID: \"523dde62-fffc-455c-986c-9b69306a6225\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9xkt2" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.721529 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b76c3489-3b8c-4e02-b757-00e290f24fc9-config\") pod \"authentication-operator-69f744f599-vz5q2\" (UID: \"b76c3489-3b8c-4e02-b757-00e290f24fc9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.721765 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/244411d4-7a54-4ce6-9eb8-cbaa12838fc0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wxnpz\" (UID: \"244411d4-7a54-4ce6-9eb8-cbaa12838fc0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.721860 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/935b4d1d-2bd9-4594-a23b-d823402ac019-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.721952 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dndjs\" (UniqueName: \"kubernetes.io/projected/935b4d1d-2bd9-4594-a23b-d823402ac019-kube-api-access-dndjs\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.722033 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b56821-f503-4316-b494-6c53ea6037b4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2wn5\" (UID: \"b4b56821-f503-4316-b494-6c53ea6037b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.722106 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.722186 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.722271 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-etcd-serving-ca\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.722379 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b76c3489-3b8c-4e02-b757-00e290f24fc9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vz5q2\" (UID: \"b76c3489-3b8c-4e02-b757-00e290f24fc9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.722461 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5w9r\" (UniqueName: \"kubernetes.io/projected/78ff7018-4a24-46de-b71b-6a4fb8b8b8ee-kube-api-access-s5w9r\") pod \"migrator-59844c95c7-r4vx5\" (UID: \"78ff7018-4a24-46de-b71b-6a4fb8b8b8ee\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4vx5" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.722534 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.722625 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-audit\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.722699 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.722778 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmmf4\" (UniqueName: \"kubernetes.io/projected/56474967-807c-4a3e-8037-45dfb0b88fe2-kube-api-access-wmmf4\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.722846 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/523dde62-fffc-455c-986c-9b69306a6225-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9xkt2\" (UID: \"523dde62-fffc-455c-986c-9b69306a6225\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9xkt2" 
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.722919 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-287d8\" (UniqueName: \"kubernetes.io/projected/9da966a1-56e1-4805-b03d-97b9d2ca467a-kube-api-access-287d8\") pod \"openshift-config-operator-7777fb866f-hmnvh\" (UID: \"9da966a1-56e1-4805-b03d-97b9d2ca467a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.722988 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/935b4d1d-2bd9-4594-a23b-d823402ac019-audit-policies\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.723069 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9da966a1-56e1-4805-b03d-97b9d2ca467a-serving-cert\") pod \"openshift-config-operator-7777fb866f-hmnvh\" (UID: \"9da966a1-56e1-4805-b03d-97b9d2ca467a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.723136 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.723200 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.723377 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/935b4d1d-2bd9-4594-a23b-d823402ac019-audit-dir\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.724192 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.731505 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x8tnt"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.731797 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.734606 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-image-import-ca\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.734856 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.735987 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b76c3489-3b8c-4e02-b757-00e290f24fc9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vz5q2\" (UID: \"b76c3489-3b8c-4e02-b757-00e290f24fc9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.736644 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b76c3489-3b8c-4e02-b757-00e290f24fc9-config\") pod \"authentication-operator-69f744f599-vz5q2\" (UID: \"b76c3489-3b8c-4e02-b757-00e290f24fc9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.737059 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-serving-cert\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.737056 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.737345 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/935b4d1d-2bd9-4594-a23b-d823402ac019-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.737451 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b56821-f503-4316-b494-6c53ea6037b4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2wn5\" (UID: \"b4b56821-f503-4316-b494-6c53ea6037b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.737491 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-audit-dir\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " 
pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.737657 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-etcd-serving-ca\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.737772 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9da966a1-56e1-4805-b03d-97b9d2ca467a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hmnvh\" (UID: \"9da966a1-56e1-4805-b03d-97b9d2ca467a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.740496 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dsfpw"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.740693 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.741102 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.741913 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48f0e27b-d4d8-4118-98af-e6fa04663c27-trusted-ca\") pod \"console-operator-58897d9998-h2nf9\" (UID: \"48f0e27b-d4d8-4118-98af-e6fa04663c27\") " pod="openshift-console-operator/console-operator-58897d9998-h2nf9" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.742265 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-audit\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.742936 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.743011 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.743585 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.743812 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/935b4d1d-2bd9-4594-a23b-d823402ac019-encryption-config\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.744202 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f0e27b-d4d8-4118-98af-e6fa04663c27-config\") pod \"console-operator-58897d9998-h2nf9\" (UID: \"48f0e27b-d4d8-4118-98af-e6fa04663c27\") " pod="openshift-console-operator/console-operator-58897d9998-h2nf9" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.744769 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b76c3489-3b8c-4e02-b757-00e290f24fc9-service-ca-bundle\") pod \"authentication-operator-69f744f599-vz5q2\" (UID: \"b76c3489-3b8c-4e02-b757-00e290f24fc9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.744841 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-node-pullsecrets\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.720385 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-config\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.745077 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/244411d4-7a54-4ce6-9eb8-cbaa12838fc0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wxnpz\" (UID: \"244411d4-7a54-4ce6-9eb8-cbaa12838fc0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.745574 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48f0e27b-d4d8-4118-98af-e6fa04663c27-serving-cert\") pod \"console-operator-58897d9998-h2nf9\" (UID: \"48f0e27b-d4d8-4118-98af-e6fa04663c27\") " pod="openshift-console-operator/console-operator-58897d9998-h2nf9" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.746602 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-encryption-config\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.747084 4625 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.748096 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.749342 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.749465 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/244411d4-7a54-4ce6-9eb8-cbaa12838fc0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wxnpz\" (UID: \"244411d4-7a54-4ce6-9eb8-cbaa12838fc0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.749909 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/935b4d1d-2bd9-4594-a23b-d823402ac019-audit-policies\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.750000 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56474967-807c-4a3e-8037-45dfb0b88fe2-audit-dir\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.750549 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/935b4d1d-2bd9-4594-a23b-d823402ac019-etcd-client\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.751294 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.751833 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.752788 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9da966a1-56e1-4805-b03d-97b9d2ca467a-serving-cert\") pod \"openshift-config-operator-7777fb866f-hmnvh\" (UID: \"9da966a1-56e1-4805-b03d-97b9d2ca467a\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.753595 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.755189 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.756007 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.758991 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4b56821-f503-4316-b494-6c53ea6037b4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2wn5\" (UID: \"b4b56821-f503-4316-b494-6c53ea6037b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.759179 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9xkt2"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.759544 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/935b4d1d-2bd9-4594-a23b-d823402ac019-serving-cert\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.759803 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f086744a-9c7e-46bc-b05a-cce4599e47aa-metrics-tls\") pod \"dns-operator-744455d44c-dsfpw\" (UID: \"f086744a-9c7e-46bc-b05a-cce4599e47aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsfpw" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.763925 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.764697 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.771866 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.771936 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vz5q2"] Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.772010 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dxltq"] Dec 02 13:46:35 crc 
kubenswrapper[4625]: I1202 13:46:35.773540 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dxltq"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.766701 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-etcd-client\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.774473 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hrfrv"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.776688 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hrfrv"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.778376 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tjbfd"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.780115 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b76c3489-3b8c-4e02-b757-00e290f24fc9-serving-cert\") pod \"authentication-operator-69f744f599-vz5q2\" (UID: \"b76c3489-3b8c-4e02-b757-00e290f24fc9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.780805 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.782213 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r4vx5"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.782673 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.785128 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9zdhl"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.787031 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bnk2h"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.792779 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.797800 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.800612 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.801720 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.802886 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.804819 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-66bnq"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.806725 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.808161 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.809674 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sc4p7"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.812396 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dxltq"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.814325 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-f58fc"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.815497 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hm5k5"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.815518 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-f58fc"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.816523 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.817635 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.818796 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sjskb"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.819817 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bfk9k"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.820905 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-pr728"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.822084 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.822688 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.825630 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hrfrv"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.830278 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-f58fc"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.830400 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/523dde62-fffc-455c-986c-9b69306a6225-config\") pod \"kube-apiserver-operator-766d6c64bb-9xkt2\" (UID: \"523dde62-fffc-455c-986c-9b69306a6225\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9xkt2"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.886056 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.888089 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.888681 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.889555 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44"]
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.903542 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.923885 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.943474 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.954422 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/523dde62-fffc-455c-986c-9b69306a6225-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9xkt2\" (UID: \"523dde62-fffc-455c-986c-9b69306a6225\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9xkt2"
Dec 02 13:46:35 crc kubenswrapper[4625]: I1202 13:46:35.963690 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.009205 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hft9\" (UniqueName: \"kubernetes.io/projected/a0c94cff-bbf9-4818-925f-5d46df464248-kube-api-access-6hft9\") pod \"openshift-apiserver-operator-796bbdcf4f-dhfc6\" (UID: \"a0c94cff-bbf9-4818-925f-5d46df464248\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.031148 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpknl\" (UniqueName: \"kubernetes.io/projected/30f3fae9-f4d5-4f32-9498-5d2a2d801654-kube-api-access-wpknl\") pod \"machine-api-operator-5694c8668f-p4l8q\" (UID: \"30f3fae9-f4d5-4f32-9498-5d2a2d801654\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.040500 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59r8c\" (UniqueName: \"kubernetes.io/projected/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-kube-api-access-59r8c\") pod \"route-controller-manager-6576b87f9c-ctx7m\" (UID: \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.057868 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtdsd\" (UniqueName: \"kubernetes.io/projected/875633b5-d52b-4d18-9322-dfbe2d73aed4-kube-api-access-xtdsd\") pod \"controller-manager-879f6c89f-rxs7k\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.071364 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e59ccd3d7547b955b4efbe8490ab0cef4cedb34b76a4672ed74f9dc57629e7fc"}
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.071516 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.073848 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"56dcec80eaa177efb7e09406f98bdd9f1571cc3b4b6e704e912fab0403f1d633"}
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.076142 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3d72cc8acbfb3a218b3ebfe8e42fa0cd4d703c9f3b666e576308ce23fa7acebf"}
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.083797 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.104810 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.123107 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.143917 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.164002 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.183449 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.203357 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.223524 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.242750 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.261640 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.272650 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.273061 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.281984 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.283985 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.290254 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.304158 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.352698 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.361083 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.363984 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.383728 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.403875 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.524373 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.524579 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.524710 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.524931 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.525097 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.543184 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.653626 4625 request.go:700] Waited for 1.040218606s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&limit=500&resourceVersion=0
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.655393 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.666997 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.667273 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.667511 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.667827 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.672279 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.683193 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.702341 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.723652 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.749810 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.763492 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.782561 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.802863 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.823149 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.845513 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.863067 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.883441 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.911405 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.930156 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.944720 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.964146 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 02 13:46:36 crc kubenswrapper[4625]: I1202 13:46:36.983376 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.003356 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.025056 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.045780 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.064123 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rxs7k"]
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.064704 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.080654 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6"]
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.083419 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.090295 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" event={"ID":"875633b5-d52b-4d18-9322-dfbe2d73aed4","Type":"ContainerStarted","Data":"4c72f9a530c7effc0e12932c35b4c10d727a8bb160d812d16062eaaa30744c4a"}
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.110953 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.119811 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m"]
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.123977 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 02 13:46:37 crc kubenswrapper[4625]: W1202 13:46:37.142752 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e0bdd20_db2f_4cc8_b939_5ccb65599bbb.slice/crio-d17b21b4c15d0a028734d04c0267748d3eb0c7fc758c916a378a70c8dbf7e4de WatchSource:0}: Error finding container d17b21b4c15d0a028734d04c0267748d3eb0c7fc758c916a378a70c8dbf7e4de: Status 404 returned error can't find the container with id d17b21b4c15d0a028734d04c0267748d3eb0c7fc758c916a378a70c8dbf7e4de
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.145015 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.153083 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p4l8q"]
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.169776 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.203660 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqz4r\" (UniqueName: \"kubernetes.io/projected/b4b56821-f503-4316-b494-6c53ea6037b4-kube-api-access-tqz4r\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2wn5\" (UID: \"b4b56821-f503-4316-b494-6c53ea6037b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.218808 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmzjd\" (UniqueName: \"kubernetes.io/projected/f086744a-9c7e-46bc-b05a-cce4599e47aa-kube-api-access-kmzjd\") pod \"dns-operator-744455d44c-dsfpw\" (UID: \"f086744a-9c7e-46bc-b05a-cce4599e47aa\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsfpw"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.224176 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.242331 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.259423 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-562wn\" (UniqueName: \"kubernetes.io/projected/48f0e27b-d4d8-4118-98af-e6fa04663c27-kube-api-access-562wn\") pod \"console-operator-58897d9998-h2nf9\" (UID: \"48f0e27b-d4d8-4118-98af-e6fa04663c27\") " pod="openshift-console-operator/console-operator-58897d9998-h2nf9"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.263800 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.285179 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.303052 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.326641 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.361616 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjj4k\" (UniqueName: \"kubernetes.io/projected/244411d4-7a54-4ce6-9eb8-cbaa12838fc0-kube-api-access-fjj4k\") pod \"cluster-image-registry-operator-dc59b4c8b-wxnpz\" (UID: \"244411d4-7a54-4ce6-9eb8-cbaa12838fc0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.384441 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mx2c\" (UniqueName: \"kubernetes.io/projected/637baf2f-239a-405a-8cde-a46bf3f7877d-kube-api-access-2mx2c\") pod \"downloads-7954f5f757-5sq66\" (UID: \"637baf2f-239a-405a-8cde-a46bf3f7877d\") " pod="openshift-console/downloads-7954f5f757-5sq66"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.414052 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/523dde62-fffc-455c-986c-9b69306a6225-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9xkt2\" (UID: \"523dde62-fffc-455c-986c-9b69306a6225\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9xkt2"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.426440 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/244411d4-7a54-4ce6-9eb8-cbaa12838fc0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wxnpz\" (UID: \"244411d4-7a54-4ce6-9eb8-cbaa12838fc0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.450047 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-h2nf9"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.450894 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dsfpw"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.454078 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dndjs\" (UniqueName: \"kubernetes.io/projected/935b4d1d-2bd9-4594-a23b-d823402ac019-kube-api-access-dndjs\") pod \"apiserver-7bbb656c7d-rqhcv\" (UID: \"935b4d1d-2bd9-4594-a23b-d823402ac019\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.480708 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmr2h\" (UniqueName: \"kubernetes.io/projected/2d498f8f-b8ca-41f0-96a7-d1c170a2fa15-kube-api-access-vmr2h\") pod \"apiserver-76f77b778f-tjbfd\" (UID: \"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15\") " pod="openshift-apiserver/apiserver-76f77b778f-tjbfd"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.487769 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tjbfd"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.493828 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmmf4\" (UniqueName: \"kubernetes.io/projected/56474967-807c-4a3e-8037-45dfb0b88fe2-kube-api-access-wmmf4\") pod \"oauth-openshift-558db77b4-x8tnt\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.498844 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5w9r\" (UniqueName: \"kubernetes.io/projected/78ff7018-4a24-46de-b71b-6a4fb8b8b8ee-kube-api-access-s5w9r\") pod \"migrator-59844c95c7-r4vx5\" (UID: \"78ff7018-4a24-46de-b71b-6a4fb8b8b8ee\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4vx5"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.520396 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5sq66"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.523423 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9pcv\" (UniqueName: \"kubernetes.io/projected/b76c3489-3b8c-4e02-b757-00e290f24fc9-kube-api-access-n9pcv\") pod \"authentication-operator-69f744f599-vz5q2\" (UID: \"b76c3489-3b8c-4e02-b757-00e290f24fc9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.525106 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5"]
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.535145 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.539093 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-287d8\" (UniqueName: \"kubernetes.io/projected/9da966a1-56e1-4805-b03d-97b9d2ca467a-kube-api-access-287d8\") pod \"openshift-config-operator-7777fb866f-hmnvh\" (UID: \"9da966a1-56e1-4805-b03d-97b9d2ca467a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.543750 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.566460 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.570712 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.570976 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.583573 4625 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.590875 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.600036 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9xkt2"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.603494 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.607942 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.616622 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4vx5"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.623362 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.643545 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.661615 4625 request.go:700] Waited for 1.8457729s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.664255 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.684303 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.704035 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 02 13:46:37 crc kubenswrapper[4625]: I1202 13:46:37.725017 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 02 13:46:38 crc kubenswrapper[4625]: I1202 13:46:38.095099 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q" event={"ID":"30f3fae9-f4d5-4f32-9498-5d2a2d801654","Type":"ContainerStarted","Data":"5ab389640fc62208d1b79365ce0941179e34b713c5b86041eafcaa2225f0472c"}
Dec 02 13:46:38 crc kubenswrapper[4625]: I1202 13:46:38.095164 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q" event={"ID":"30f3fae9-f4d5-4f32-9498-5d2a2d801654","Type":"ContainerStarted","Data":"ee89ab7bac7ce277693d3400c8ca11b183472bcf04e09320b570f00ff6c227b7"}
Dec 02 13:46:38 crc kubenswrapper[4625]: I1202 13:46:38.096377 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6" event={"ID":"a0c94cff-bbf9-4818-925f-5d46df464248","Type":"ContainerStarted","Data":"324ee92d7bd9e525983a218cb41ca6e5d81866a5b01acd90491f4156ee682b86"}
Dec 02 13:46:38 crc kubenswrapper[4625]: I1202 13:46:38.096416 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6" event={"ID":"a0c94cff-bbf9-4818-925f-5d46df464248","Type":"ContainerStarted","Data":"2082a66d39a7cad9197f89271941b7d32a9a005ea205a90b7e192de76c638176"}
Dec 02 13:46:38 crc kubenswrapper[4625]: I1202 13:46:38.097745 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" event={"ID":"875633b5-d52b-4d18-9322-dfbe2d73aed4","Type":"ContainerStarted","Data":"89b44ad264f6673ce9311ec01ae81859911e730706f0946f2b4193981ad0ff1b"}
Dec 02 13:46:38 crc kubenswrapper[4625]: I1202 13:46:38.097971 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k"
Dec 02 13:46:38 crc kubenswrapper[4625]: I1202 13:46:38.100145 4625 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rxs7k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Dec 02 13:46:38 crc kubenswrapper[4625]: I1202 13:46:38.100188 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" podUID="875633b5-d52b-4d18-9322-dfbe2d73aed4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Dec 02 13:46:38 crc kubenswrapper[4625]: I1202 13:46:38.100546 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" event={"ID":"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb","Type":"ContainerStarted","Data":"6cf0d7911954d2657c932d9237dbc66399faa07da992a20a3059ec7590458599"}
Dec 02 13:46:38 crc kubenswrapper[4625]: I1202 13:46:38.100614 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" event={"ID":"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb","Type":"ContainerStarted","Data":"d17b21b4c15d0a028734d04c0267748d3eb0c7fc758c916a378a70c8dbf7e4de"}
Dec 02 13:46:38 crc kubenswrapper[4625]: I1202 13:46:38.101219 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m"
Dec 02 13:46:38 crc kubenswrapper[4625]: I1202 13:46:38.102031 4625 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ctx7m container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Dec 02 13:46:38 crc kubenswrapper[4625]: I1202 13:46:38.102093 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" podUID="1e0bdd20-db2f-4cc8-b939-5ccb65599bbb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.460106 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c87d97fb-8391-4f0f-8b3d-a404721de262-registry-certificates\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.460181 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c87d97fb-8391-4f0f-8b3d-a404721de262-trusted-ca\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.460236 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c87d97fb-8391-4f0f-8b3d-a404721de262-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.460396 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-registry-tls\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.460464 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-bound-sa-token\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.460516 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.460546 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpsks\" (UniqueName: \"kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-kube-api-access-wpsks\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.460587 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c87d97fb-8391-4f0f-8b3d-a404721de262-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:39 crc kubenswrapper[4625]: E1202 13:46:39.462738 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:39.962706739 +0000 UTC m=+155.924883824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.581980 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582144 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4667cfe8-6ad8-461f-9e16-79a64a33642b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gndll\" (UID: \"4667cfe8-6ad8-461f-9e16-79a64a33642b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582165 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-295bq\" (UniqueName: \"kubernetes.io/projected/6c0a10a7-ccfe-45a2-8b74-df21b80d67df-kube-api-access-295bq\") pod \"ingress-operator-5b745b69d9-wsvpt\" (UID: \"6c0a10a7-ccfe-45a2-8b74-df21b80d67df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582184 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8pwk\" (UniqueName: \"kubernetes.io/projected/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-kube-api-access-p8pwk\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582200 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c0a10a7-ccfe-45a2-8b74-df21b80d67df-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wsvpt\" (UID: \"6c0a10a7-ccfe-45a2-8b74-df21b80d67df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582245 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c0a10a7-ccfe-45a2-8b74-df21b80d67df-metrics-tls\") pod \"ingress-operator-5b745b69d9-wsvpt\" (UID: \"6c0a10a7-ccfe-45a2-8b74-df21b80d67df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582262 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-oauth-config\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582279 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-oauth-serving-cert\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582297 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-registry-tls\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582327 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef-auth-proxy-config\") pod \"machine-approver-56656f9798-7wbq4\" (UID: \"5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582344 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg4nv\" (UniqueName: \"kubernetes.io/projected/35ca6406-d63b-41a2-9217-85afd26abacd-kube-api-access-kg4nv\") pod \"cluster-samples-operator-665b6dd947-bnk2h\" (UID: \"35ca6406-d63b-41a2-9217-85afd26abacd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bnk2h"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582375 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b28f8782-c7d3-4034-b269-90be9cbd9eec-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n5hqd\" (UID: \"b28f8782-c7d3-4034-b269-90be9cbd9eec\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582404 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-bound-sa-token\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582422 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4667cfe8-6ad8-461f-9e16-79a64a33642b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gndll\" (UID: \"4667cfe8-6ad8-461f-9e16-79a64a33642b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582436 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-config\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582460 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpsks\" (UniqueName: \"kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-kube-api-access-wpsks\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582477 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcmqk\" (UniqueName: \"kubernetes.io/projected/b28f8782-c7d3-4034-b269-90be9cbd9eec-kube-api-access-qcmqk\") pod \"machine-config-controller-84d6567774-n5hqd\" (UID: \"b28f8782-c7d3-4034-b269-90be9cbd9eec\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582495 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef-machine-approver-tls\") pod \"machine-approver-56656f9798-7wbq4\" (UID: \"5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582525 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b28f8782-c7d3-4034-b269-90be9cbd9eec-proxy-tls\") pod \"machine-config-controller-84d6567774-n5hqd\" (UID: \"b28f8782-c7d3-4034-b269-90be9cbd9eec\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582542 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c0a10a7-ccfe-45a2-8b74-df21b80d67df-trusted-ca\") pod \"ingress-operator-5b745b69d9-wsvpt\" (UID: \"6c0a10a7-ccfe-45a2-8b74-df21b80d67df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582583 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/35ca6406-d63b-41a2-9217-85afd26abacd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bnk2h\" (UID: \"35ca6406-d63b-41a2-9217-85afd26abacd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bnk2h"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582604 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c87d97fb-8391-4f0f-8b3d-a404721de262-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582624 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-trusted-ca-bundle\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582645 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4667cfe8-6ad8-461f-9e16-79a64a33642b-config\") pod \"kube-controller-manager-operator-78b949d7b-gndll\" (UID: \"4667cfe8-6ad8-461f-9e16-79a64a33642b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582666 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c87d97fb-8391-4f0f-8b3d-a404721de262-registry-certificates\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582680 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c87d97fb-8391-4f0f-8b3d-a404721de262-trusted-ca\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582697 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef-config\") pod \"machine-approver-56656f9798-7wbq4\" (UID: \"5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582723 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c87d97fb-8391-4f0f-8b3d-a404721de262-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582781 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htrxf\" (UniqueName: \"kubernetes.io/projected/5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef-kube-api-access-htrxf\") pod \"machine-approver-56656f9798-7wbq4\" (UID: \"5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582807 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-service-ca\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.582833 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-serving-cert\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728"
Dec 02 13:46:39 crc kubenswrapper[4625]: E1202 13:46:39.585974 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:40.08593045 +0000 UTC m=+156.048107515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.591196 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-registry-tls\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.684927 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82577eb4-f869-4200-b5b4-929920b4272a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfxdf\" (UID: \"82577eb4-f869-4200-b5b4-929920b4272a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.684962 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-etcd-client\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.684987 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-trusted-ca-bundle\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685004 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/748aaca7-daf1-4bd8-b397-1b3c6eedbc4a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hfdtw\" (UID: \"748aaca7-daf1-4bd8-b397-1b3c6eedbc4a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685024 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4667cfe8-6ad8-461f-9e16-79a64a33642b-config\") pod \"kube-controller-manager-operator-78b949d7b-gndll\" (UID: \"4667cfe8-6ad8-461f-9e16-79a64a33642b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685042 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4065d249-ffb1-406a-9e88-b6b97cf70f2a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hm5k5\" (UID: \"4065d249-ffb1-406a-9e88-b6b97cf70f2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685057 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0f73e926-f5cf-46b3-afc7-0fe387cf5704-certs\") pod \"machine-config-server-8m8hp\" (UID: \"0f73e926-f5cf-46b3-afc7-0fe387cf5704\") " pod="openshift-machine-config-operator/machine-config-server-8m8hp"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685072 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d9ab4834-e296-4113-ad72-c1e6c86b3ee6-signing-key\") pod \"service-ca-9c57cc56f-66bnq\" (UID: \"d9ab4834-e296-4113-ad72-c1e6c86b3ee6\") " pod="openshift-service-ca/service-ca-9c57cc56f-66bnq"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685102 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef-config\") pod \"machine-approver-56656f9798-7wbq4\" (UID: \"5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685117 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnpd4\" (UniqueName: \"kubernetes.io/projected/4065d249-ffb1-406a-9e88-b6b97cf70f2a-kube-api-access-bnpd4\") pod \"marketplace-operator-79b997595-hm5k5\" (UID: \"4065d249-ffb1-406a-9e88-b6b97cf70f2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685175 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htrxf\" (UniqueName: \"kubernetes.io/projected/5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef-kube-api-access-htrxf\") pod \"machine-approver-56656f9798-7wbq4\" (UID: \"5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685192 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7c402f2a-3e9f-4eba-a881-a59ae3626f5a-srv-cert\") pod \"catalog-operator-68c6474976-qcjf2\" (UID: \"7c402f2a-3e9f-4eba-a881-a59ae3626f5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685206 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdbs9\" (UniqueName: \"kubernetes.io/projected/7c402f2a-3e9f-4eba-a881-a59ae3626f5a-kube-api-access-jdbs9\") pod \"catalog-operator-68c6474976-qcjf2\" (UID: \"7c402f2a-3e9f-4eba-a881-a59ae3626f5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685221 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f328b6e0-83f8-4ef2-abed-fe6dbabab077-csi-data-dir\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685237 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf2cc\" (UniqueName: \"kubernetes.io/projected/2ec55e1a-74d5-4c19-abde-2b8d8e9f392c-kube-api-access-gf2cc\") pod \"packageserver-d55dfcdfc-9vd9w\" (UID: \"2ec55e1a-74d5-4c19-abde-2b8d8e9f392c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685262 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-serving-cert\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685278 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ca4e0fc-6aab-4f08-afdf-d61583c63f6f-service-ca-bundle\") pod \"router-default-5444994796-pqzl9\" (UID: \"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f\") " pod="openshift-ingress/router-default-5444994796-pqzl9"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685292 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-etcd-ca\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685396 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-295bq\" (UniqueName: \"kubernetes.io/projected/6c0a10a7-ccfe-45a2-8b74-df21b80d67df-kube-api-access-295bq\") pod \"ingress-operator-5b745b69d9-wsvpt\" (UID: \"6c0a10a7-ccfe-45a2-8b74-df21b80d67df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685474 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-config\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685502 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/748aaca7-daf1-4bd8-b397-1b3c6eedbc4a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hfdtw\" (UID: \"748aaca7-daf1-4bd8-b397-1b3c6eedbc4a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685566 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c0a10a7-ccfe-45a2-8b74-df21b80d67df-metrics-tls\") pod \"ingress-operator-5b745b69d9-wsvpt\" (UID: \"6c0a10a7-ccfe-45a2-8b74-df21b80d67df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt"
Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685645 4625 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0f73e926-f5cf-46b3-afc7-0fe387cf5704-node-bootstrap-token\") pod \"machine-config-server-8m8hp\" (UID: \"0f73e926-f5cf-46b3-afc7-0fe387cf5704\") " pod="openshift-machine-config-operator/machine-config-server-8m8hp" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685663 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52jj5\" (UniqueName: \"kubernetes.io/projected/47088586-bc86-4ee1-99db-31eb2eb14ffc-kube-api-access-52jj5\") pod \"machine-config-operator-74547568cd-r96rb\" (UID: \"47088586-bc86-4ee1-99db-31eb2eb14ffc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685688 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-oauth-serving-cert\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685704 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb264\" (UniqueName: \"kubernetes.io/projected/87ed22c8-2d67-46b7-91e9-292293517801-kube-api-access-jb264\") pod \"service-ca-operator-777779d784-qcgnq\" (UID: \"87ed22c8-2d67-46b7-91e9-292293517801\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685741 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef-auth-proxy-config\") pod \"machine-approver-56656f9798-7wbq4\" (UID: \"5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685756 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjt9v\" (UniqueName: \"kubernetes.io/projected/689505d7-9623-458a-b60a-e584c405540d-kube-api-access-hjt9v\") pod \"dns-default-hrfrv\" (UID: \"689505d7-9623-458a-b60a-e584c405540d\") " pod="openshift-dns/dns-default-hrfrv" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685814 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82577eb4-f869-4200-b5b4-929920b4272a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfxdf\" (UID: \"82577eb4-f869-4200-b5b4-929920b4272a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685846 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7cnv\" (UniqueName: \"kubernetes.io/projected/5ca4e0fc-6aab-4f08-afdf-d61583c63f6f-kube-api-access-f7cnv\") pod \"router-default-5444994796-pqzl9\" (UID: \"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f\") " pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685861 4625 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87ed22c8-2d67-46b7-91e9-292293517801-serving-cert\") pod \"service-ca-operator-777779d784-qcgnq\" (UID: \"87ed22c8-2d67-46b7-91e9-292293517801\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685953 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bssbg\" (UniqueName: \"kubernetes.io/projected/c8bad892-59d1-45b5-a388-156353675860-kube-api-access-bssbg\") pod \"collect-profiles-29411385-jvp44\" (UID: \"c8bad892-59d1-45b5-a388-156353675860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685976 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2ec55e1a-74d5-4c19-abde-2b8d8e9f392c-tmpfs\") pod \"packageserver-d55dfcdfc-9vd9w\" (UID: \"2ec55e1a-74d5-4c19-abde-2b8d8e9f392c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.685992 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7c402f2a-3e9f-4eba-a881-a59ae3626f5a-profile-collector-cert\") pod \"catalog-operator-68c6474976-qcjf2\" (UID: \"7c402f2a-3e9f-4eba-a881-a59ae3626f5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.686008 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef-machine-approver-tls\") pod \"machine-approver-56656f9798-7wbq4\" (UID: \"5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.686022 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/47088586-bc86-4ee1-99db-31eb2eb14ffc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r96rb\" (UID: \"47088586-bc86-4ee1-99db-31eb2eb14ffc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.686037 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f328b6e0-83f8-4ef2-abed-fe6dbabab077-socket-dir\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.686063 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcmqk\" (UniqueName: \"kubernetes.io/projected/b28f8782-c7d3-4034-b269-90be9cbd9eec-kube-api-access-qcmqk\") pod \"machine-config-controller-84d6567774-n5hqd\" (UID: \"b28f8782-c7d3-4034-b269-90be9cbd9eec\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.686079 4625 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b28f8782-c7d3-4034-b269-90be9cbd9eec-proxy-tls\") pod \"machine-config-controller-84d6567774-n5hqd\" (UID: \"b28f8782-c7d3-4034-b269-90be9cbd9eec\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.686093 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7e8d817d-9152-48c4-b7b0-f9df76891753-srv-cert\") pod \"olm-operator-6b444d44fb-g2crb\" (UID: \"7e8d817d-9152-48c4-b7b0-f9df76891753\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690184 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ec55e1a-74d5-4c19-abde-2b8d8e9f392c-webhook-cert\") pod \"packageserver-d55dfcdfc-9vd9w\" (UID: \"2ec55e1a-74d5-4c19-abde-2b8d8e9f392c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690230 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-955cd\" (UniqueName: \"kubernetes.io/projected/b067825c-8b40-4b11-bf8b-52ebf31ec4ba-kube-api-access-955cd\") pod \"package-server-manager-789f6589d5-wqc6c\" (UID: \"b067825c-8b40-4b11-bf8b-52ebf31ec4ba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690294 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4065d249-ffb1-406a-9e88-b6b97cf70f2a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hm5k5\" (UID: \"4065d249-ffb1-406a-9e88-b6b97cf70f2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690336 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/35ca6406-d63b-41a2-9217-85afd26abacd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bnk2h\" (UID: \"35ca6406-d63b-41a2-9217-85afd26abacd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bnk2h" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690376 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ec55e1a-74d5-4c19-abde-2b8d8e9f392c-apiservice-cert\") pod \"packageserver-d55dfcdfc-9vd9w\" (UID: \"2ec55e1a-74d5-4c19-abde-2b8d8e9f392c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690396 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/748aaca7-daf1-4bd8-b397-1b3c6eedbc4a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hfdtw\" (UID: \"748aaca7-daf1-4bd8-b397-1b3c6eedbc4a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690474 4625 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lztlf\" (UniqueName: \"kubernetes.io/projected/d9ab4834-e296-4113-ad72-c1e6c86b3ee6-kube-api-access-lztlf\") pod \"service-ca-9c57cc56f-66bnq\" (UID: \"d9ab4834-e296-4113-ad72-c1e6c86b3ee6\") " pod="openshift-service-ca/service-ca-9c57cc56f-66bnq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690506 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7490ca6-a9c3-4103-98e8-951179079bf7-cert\") pod \"ingress-canary-f58fc\" (UID: \"a7490ca6-a9c3-4103-98e8-951179079bf7\") " pod="openshift-ingress-canary/ingress-canary-f58fc" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690521 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/47088586-bc86-4ee1-99db-31eb2eb14ffc-images\") pod \"machine-config-operator-74547568cd-r96rb\" (UID: \"47088586-bc86-4ee1-99db-31eb2eb14ffc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690537 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbnxv\" (UniqueName: \"kubernetes.io/projected/82577eb4-f869-4200-b5b4-929920b4272a-kube-api-access-zbnxv\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfxdf\" (UID: \"82577eb4-f869-4200-b5b4-929920b4272a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690571 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b067825c-8b40-4b11-bf8b-52ebf31ec4ba-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wqc6c\" (UID: \"b067825c-8b40-4b11-bf8b-52ebf31ec4ba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690600 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-service-ca\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690615 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0e4e6fc1-bf89-455e-8409-31ba869ffdf1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bfk9k\" (UID: \"0e4e6fc1-bf89-455e-8409-31ba869ffdf1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bfk9k" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690630 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f328b6e0-83f8-4ef2-abed-fe6dbabab077-registration-dir\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690648 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-etcd-service-ca\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690663 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p57n\" (UniqueName: \"kubernetes.io/projected/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-kube-api-access-8p57n\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690678 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d9ab4834-e296-4113-ad72-c1e6c86b3ee6-signing-cabundle\") pod \"service-ca-9c57cc56f-66bnq\" (UID: \"d9ab4834-e296-4113-ad72-c1e6c86b3ee6\") " pod="openshift-service-ca/service-ca-9c57cc56f-66bnq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690706 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfa9a143-ca0d-4f32-b9a7-b2acb327bedc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sjskb\" (UID: \"bfa9a143-ca0d-4f32-b9a7-b2acb327bedc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sjskb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690747 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4667cfe8-6ad8-461f-9e16-79a64a33642b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gndll\" (UID: \"4667cfe8-6ad8-461f-9e16-79a64a33642b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690764 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ca4e0fc-6aab-4f08-afdf-d61583c63f6f-metrics-certs\") pod \"router-default-5444994796-pqzl9\" (UID: \"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f\") " pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690779 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ed22c8-2d67-46b7-91e9-292293517801-config\") pod \"service-ca-operator-777779d784-qcgnq\" (UID: \"87ed22c8-2d67-46b7-91e9-292293517801\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690810 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8pwk\" (UniqueName: \"kubernetes.io/projected/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-kube-api-access-p8pwk\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690836 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/689505d7-9623-458a-b60a-e584c405540d-config-volume\") pod \"dns-default-hrfrv\" (UID: \"689505d7-9623-458a-b60a-e584c405540d\") " pod="openshift-dns/dns-default-hrfrv" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690850 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8bad892-59d1-45b5-a388-156353675860-config-volume\") pod \"collect-profiles-29411385-jvp44\" (UID: \"c8bad892-59d1-45b5-a388-156353675860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690877 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c0a10a7-ccfe-45a2-8b74-df21b80d67df-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wsvpt\" (UID: \"6c0a10a7-ccfe-45a2-8b74-df21b80d67df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690892 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/689505d7-9623-458a-b60a-e584c405540d-metrics-tls\") pod \"dns-default-hrfrv\" (UID: \"689505d7-9623-458a-b60a-e584c405540d\") " pod="openshift-dns/dns-default-hrfrv" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690923 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/47088586-bc86-4ee1-99db-31eb2eb14ffc-proxy-tls\") pod \"machine-config-operator-74547568cd-r96rb\" (UID: \"47088586-bc86-4ee1-99db-31eb2eb14ffc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690939 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f328b6e0-83f8-4ef2-abed-fe6dbabab077-plugins-dir\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690956 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5ca4e0fc-6aab-4f08-afdf-d61583c63f6f-default-certificate\") pod \"router-default-5444994796-pqzl9\" (UID: \"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f\") " pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.690975 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-oauth-config\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691002 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg4nv\" (UniqueName: \"kubernetes.io/projected/35ca6406-d63b-41a2-9217-85afd26abacd-kube-api-access-kg4nv\") pod \"cluster-samples-operator-665b6dd947-bnk2h\" (UID: \"35ca6406-d63b-41a2-9217-85afd26abacd\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bnk2h" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691019 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzft7\" (UniqueName: \"kubernetes.io/projected/f328b6e0-83f8-4ef2-abed-fe6dbabab077-kube-api-access-bzft7\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691047 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b28f8782-c7d3-4034-b269-90be9cbd9eec-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n5hqd\" (UID: \"b28f8782-c7d3-4034-b269-90be9cbd9eec\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691063 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twd8l\" (UniqueName: \"kubernetes.io/projected/0f73e926-f5cf-46b3-afc7-0fe387cf5704-kube-api-access-twd8l\") pod \"machine-config-server-8m8hp\" (UID: \"0f73e926-f5cf-46b3-afc7-0fe387cf5704\") " pod="openshift-machine-config-operator/machine-config-server-8m8hp" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691088 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8bad892-59d1-45b5-a388-156353675860-secret-volume\") pod \"collect-profiles-29411385-jvp44\" (UID: \"c8bad892-59d1-45b5-a388-156353675860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691121 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gbvj\" (UniqueName: \"kubernetes.io/projected/a7490ca6-a9c3-4103-98e8-951179079bf7-kube-api-access-6gbvj\") pod \"ingress-canary-f58fc\" (UID: \"a7490ca6-a9c3-4103-98e8-951179079bf7\") " pod="openshift-ingress-canary/ingress-canary-f58fc" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691137 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5ca4e0fc-6aab-4f08-afdf-d61583c63f6f-stats-auth\") pod \"router-default-5444994796-pqzl9\" (UID: \"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f\") " pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691152 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7e8d817d-9152-48c4-b7b0-f9df76891753-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g2crb\" (UID: \"7e8d817d-9152-48c4-b7b0-f9df76891753\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691169 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4667cfe8-6ad8-461f-9e16-79a64a33642b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gndll\" (UID: \"4667cfe8-6ad8-461f-9e16-79a64a33642b\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691184 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-serving-cert\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691201 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-config\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691218 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc9d2\" (UniqueName: \"kubernetes.io/projected/7e8d817d-9152-48c4-b7b0-f9df76891753-kube-api-access-mc9d2\") pod \"olm-operator-6b444d44fb-g2crb\" (UID: \"7e8d817d-9152-48c4-b7b0-f9df76891753\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691244 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691264 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f328b6e0-83f8-4ef2-abed-fe6dbabab077-mountpoint-dir\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691281 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pfkz\" (UniqueName: \"kubernetes.io/projected/0e4e6fc1-bf89-455e-8409-31ba869ffdf1-kube-api-access-4pfkz\") pod \"multus-admission-controller-857f4d67dd-bfk9k\" (UID: \"0e4e6fc1-bf89-455e-8409-31ba869ffdf1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bfk9k" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691319 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c0a10a7-ccfe-45a2-8b74-df21b80d67df-trusted-ca\") pod \"ingress-operator-5b745b69d9-wsvpt\" (UID: \"6c0a10a7-ccfe-45a2-8b74-df21b80d67df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.691341 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9lfg\" (UniqueName: \"kubernetes.io/projected/bfa9a143-ca0d-4f32-b9a7-b2acb327bedc-kube-api-access-h9lfg\") pod \"control-plane-machine-set-operator-78cbb6b69f-sjskb\" (UID: \"bfa9a143-ca0d-4f32-b9a7-b2acb327bedc\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sjskb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.693160 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-trusted-ca-bundle\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.693931 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4667cfe8-6ad8-461f-9e16-79a64a33642b-config\") pod \"kube-controller-manager-operator-78b949d7b-gndll\" (UID: \"4667cfe8-6ad8-461f-9e16-79a64a33642b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.745323 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-oauth-serving-cert\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.745791 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c0a10a7-ccfe-45a2-8b74-df21b80d67df-metrics-tls\") pod \"ingress-operator-5b745b69d9-wsvpt\" (UID: \"6c0a10a7-ccfe-45a2-8b74-df21b80d67df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.802293 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-serving-cert\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.803502 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef-config\") pod \"machine-approver-56656f9798-7wbq4\" (UID: \"5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.804968 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef-machine-approver-tls\") pod \"machine-approver-56656f9798-7wbq4\" (UID: \"5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.808793 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/35ca6406-d63b-41a2-9217-85afd26abacd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bnk2h\" (UID: \"35ca6406-d63b-41a2-9217-85afd26abacd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bnk2h" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.811544 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef-auth-proxy-config\") pod \"machine-approver-56656f9798-7wbq4\" (UID: \"5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4" Dec 02 13:46:39 crc kubenswrapper[4625]: E1202 13:46:39.811801 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:40.311774061 +0000 UTC m=+156.273951136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.812627 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-service-ca\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.815438 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b28f8782-c7d3-4034-b269-90be9cbd9eec-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n5hqd\" (UID: \"b28f8782-c7d3-4034-b269-90be9cbd9eec\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.817993 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-config\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.821218 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4667cfe8-6ad8-461f-9e16-79a64a33642b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gndll\" (UID: \"4667cfe8-6ad8-461f-9e16-79a64a33642b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.828128 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-oauth-config\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.828728 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcmqk\" (UniqueName: \"kubernetes.io/projected/b28f8782-c7d3-4034-b269-90be9cbd9eec-kube-api-access-qcmqk\") pod \"machine-config-controller-84d6567774-n5hqd\" (UID: \"b28f8782-c7d3-4034-b269-90be9cbd9eec\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd" Dec 02 
13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.837285 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c0a10a7-ccfe-45a2-8b74-df21b80d67df-trusted-ca\") pod \"ingress-operator-5b745b69d9-wsvpt\" (UID: \"6c0a10a7-ccfe-45a2-8b74-df21b80d67df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.838039 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-295bq\" (UniqueName: \"kubernetes.io/projected/6c0a10a7-ccfe-45a2-8b74-df21b80d67df-kube-api-access-295bq\") pod \"ingress-operator-5b745b69d9-wsvpt\" (UID: \"6c0a10a7-ccfe-45a2-8b74-df21b80d67df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.845570 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b28f8782-c7d3-4034-b269-90be9cbd9eec-proxy-tls\") pod \"machine-config-controller-84d6567774-n5hqd\" (UID: \"b28f8782-c7d3-4034-b269-90be9cbd9eec\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.905423 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htrxf\" (UniqueName: \"kubernetes.io/projected/5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef-kube-api-access-htrxf\") pod \"machine-approver-56656f9798-7wbq4\" (UID: \"5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.906748 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c0a10a7-ccfe-45a2-8b74-df21b80d67df-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wsvpt\" (UID: \"6c0a10a7-ccfe-45a2-8b74-df21b80d67df\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.910066 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4667cfe8-6ad8-461f-9e16-79a64a33642b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gndll\" (UID: \"4667cfe8-6ad8-461f-9e16-79a64a33642b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.918980 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919123 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4065d249-ffb1-406a-9e88-b6b97cf70f2a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hm5k5\" (UID: \"4065d249-ffb1-406a-9e88-b6b97cf70f2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919146 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/0f73e926-f5cf-46b3-afc7-0fe387cf5704-certs\") pod \"machine-config-server-8m8hp\" (UID: \"0f73e926-f5cf-46b3-afc7-0fe387cf5704\") " pod="openshift-machine-config-operator/machine-config-server-8m8hp" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919163 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d9ab4834-e296-4113-ad72-c1e6c86b3ee6-signing-key\") pod \"service-ca-9c57cc56f-66bnq\" (UID: \"d9ab4834-e296-4113-ad72-c1e6c86b3ee6\") " pod="openshift-service-ca/service-ca-9c57cc56f-66bnq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919183 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnpd4\" (UniqueName: \"kubernetes.io/projected/4065d249-ffb1-406a-9e88-b6b97cf70f2a-kube-api-access-bnpd4\") pod \"marketplace-operator-79b997595-hm5k5\" (UID: \"4065d249-ffb1-406a-9e88-b6b97cf70f2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919209 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7c402f2a-3e9f-4eba-a881-a59ae3626f5a-srv-cert\") pod \"catalog-operator-68c6474976-qcjf2\" (UID: \"7c402f2a-3e9f-4eba-a881-a59ae3626f5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919224 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdbs9\" (UniqueName: \"kubernetes.io/projected/7c402f2a-3e9f-4eba-a881-a59ae3626f5a-kube-api-access-jdbs9\") pod \"catalog-operator-68c6474976-qcjf2\" (UID: \"7c402f2a-3e9f-4eba-a881-a59ae3626f5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919242 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f328b6e0-83f8-4ef2-abed-fe6dbabab077-csi-data-dir\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919258 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf2cc\" (UniqueName: \"kubernetes.io/projected/2ec55e1a-74d5-4c19-abde-2b8d8e9f392c-kube-api-access-gf2cc\") pod \"packageserver-d55dfcdfc-9vd9w\" (UID: \"2ec55e1a-74d5-4c19-abde-2b8d8e9f392c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919274 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ca4e0fc-6aab-4f08-afdf-d61583c63f6f-service-ca-bundle\") pod \"router-default-5444994796-pqzl9\" (UID: \"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f\") " pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919292 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-etcd-ca\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:39 crc kubenswrapper[4625]: 
I1202 13:46:39.919442 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/748aaca7-daf1-4bd8-b397-1b3c6eedbc4a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hfdtw\" (UID: \"748aaca7-daf1-4bd8-b397-1b3c6eedbc4a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919457 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-config\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919474 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0f73e926-f5cf-46b3-afc7-0fe387cf5704-node-bootstrap-token\") pod \"machine-config-server-8m8hp\" (UID: \"0f73e926-f5cf-46b3-afc7-0fe387cf5704\") " pod="openshift-machine-config-operator/machine-config-server-8m8hp" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919489 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52jj5\" (UniqueName: \"kubernetes.io/projected/47088586-bc86-4ee1-99db-31eb2eb14ffc-kube-api-access-52jj5\") pod \"machine-config-operator-74547568cd-r96rb\" (UID: \"47088586-bc86-4ee1-99db-31eb2eb14ffc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919505 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb264\" (UniqueName: \"kubernetes.io/projected/87ed22c8-2d67-46b7-91e9-292293517801-kube-api-access-jb264\") pod \"service-ca-operator-777779d784-qcgnq\" (UID: \"87ed22c8-2d67-46b7-91e9-292293517801\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919521 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjt9v\" (UniqueName: \"kubernetes.io/projected/689505d7-9623-458a-b60a-e584c405540d-kube-api-access-hjt9v\") pod \"dns-default-hrfrv\" (UID: \"689505d7-9623-458a-b60a-e584c405540d\") " pod="openshift-dns/dns-default-hrfrv" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919539 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82577eb4-f869-4200-b5b4-929920b4272a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfxdf\" (UID: \"82577eb4-f869-4200-b5b4-929920b4272a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919557 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7cnv\" (UniqueName: \"kubernetes.io/projected/5ca4e0fc-6aab-4f08-afdf-d61583c63f6f-kube-api-access-f7cnv\") pod \"router-default-5444994796-pqzl9\" (UID: \"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f\") " pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919571 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/87ed22c8-2d67-46b7-91e9-292293517801-serving-cert\") pod \"service-ca-operator-777779d784-qcgnq\" (UID: \"87ed22c8-2d67-46b7-91e9-292293517801\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919587 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bssbg\" (UniqueName: \"kubernetes.io/projected/c8bad892-59d1-45b5-a388-156353675860-kube-api-access-bssbg\") pod \"collect-profiles-29411385-jvp44\" (UID: \"c8bad892-59d1-45b5-a388-156353675860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919603 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2ec55e1a-74d5-4c19-abde-2b8d8e9f392c-tmpfs\") pod \"packageserver-d55dfcdfc-9vd9w\" (UID: \"2ec55e1a-74d5-4c19-abde-2b8d8e9f392c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919617 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7c402f2a-3e9f-4eba-a881-a59ae3626f5a-profile-collector-cert\") pod \"catalog-operator-68c6474976-qcjf2\" (UID: \"7c402f2a-3e9f-4eba-a881-a59ae3626f5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919633 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/47088586-bc86-4ee1-99db-31eb2eb14ffc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r96rb\" (UID: \"47088586-bc86-4ee1-99db-31eb2eb14ffc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919648 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f328b6e0-83f8-4ef2-abed-fe6dbabab077-socket-dir\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919663 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7e8d817d-9152-48c4-b7b0-f9df76891753-srv-cert\") pod \"olm-operator-6b444d44fb-g2crb\" (UID: \"7e8d817d-9152-48c4-b7b0-f9df76891753\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919681 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ec55e1a-74d5-4c19-abde-2b8d8e9f392c-webhook-cert\") pod \"packageserver-d55dfcdfc-9vd9w\" (UID: \"2ec55e1a-74d5-4c19-abde-2b8d8e9f392c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919699 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-955cd\" (UniqueName: \"kubernetes.io/projected/b067825c-8b40-4b11-bf8b-52ebf31ec4ba-kube-api-access-955cd\") pod \"package-server-manager-789f6589d5-wqc6c\" (UID: \"b067825c-8b40-4b11-bf8b-52ebf31ec4ba\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919718 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4065d249-ffb1-406a-9e88-b6b97cf70f2a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hm5k5\" (UID: \"4065d249-ffb1-406a-9e88-b6b97cf70f2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919734 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ec55e1a-74d5-4c19-abde-2b8d8e9f392c-apiservice-cert\") pod \"packageserver-d55dfcdfc-9vd9w\" (UID: \"2ec55e1a-74d5-4c19-abde-2b8d8e9f392c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919748 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/748aaca7-daf1-4bd8-b397-1b3c6eedbc4a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hfdtw\" (UID: \"748aaca7-daf1-4bd8-b397-1b3c6eedbc4a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919778 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lztlf\" (UniqueName: \"kubernetes.io/projected/d9ab4834-e296-4113-ad72-c1e6c86b3ee6-kube-api-access-lztlf\") pod \"service-ca-9c57cc56f-66bnq\" (UID: \"d9ab4834-e296-4113-ad72-c1e6c86b3ee6\") " pod="openshift-service-ca/service-ca-9c57cc56f-66bnq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919800 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7490ca6-a9c3-4103-98e8-951179079bf7-cert\") pod \"ingress-canary-f58fc\" (UID: \"a7490ca6-a9c3-4103-98e8-951179079bf7\") " pod="openshift-ingress-canary/ingress-canary-f58fc" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919814 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/47088586-bc86-4ee1-99db-31eb2eb14ffc-images\") pod \"machine-config-operator-74547568cd-r96rb\" (UID: \"47088586-bc86-4ee1-99db-31eb2eb14ffc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919829 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbnxv\" (UniqueName: \"kubernetes.io/projected/82577eb4-f869-4200-b5b4-929920b4272a-kube-api-access-zbnxv\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfxdf\" (UID: \"82577eb4-f869-4200-b5b4-929920b4272a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919846 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b067825c-8b40-4b11-bf8b-52ebf31ec4ba-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wqc6c\" (UID: \"b067825c-8b40-4b11-bf8b-52ebf31ec4ba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c" Dec 
02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919862 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0e4e6fc1-bf89-455e-8409-31ba869ffdf1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bfk9k\" (UID: \"0e4e6fc1-bf89-455e-8409-31ba869ffdf1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bfk9k" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919877 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f328b6e0-83f8-4ef2-abed-fe6dbabab077-registration-dir\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919892 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-etcd-service-ca\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919907 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p57n\" (UniqueName: \"kubernetes.io/projected/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-kube-api-access-8p57n\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919922 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d9ab4834-e296-4113-ad72-c1e6c86b3ee6-signing-cabundle\") pod \"service-ca-9c57cc56f-66bnq\" (UID: \"d9ab4834-e296-4113-ad72-c1e6c86b3ee6\") " pod="openshift-service-ca/service-ca-9c57cc56f-66bnq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919938 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfa9a143-ca0d-4f32-b9a7-b2acb327bedc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sjskb\" (UID: \"bfa9a143-ca0d-4f32-b9a7-b2acb327bedc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sjskb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919958 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ca4e0fc-6aab-4f08-afdf-d61583c63f6f-metrics-certs\") pod \"router-default-5444994796-pqzl9\" (UID: \"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f\") " pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919972 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ed22c8-2d67-46b7-91e9-292293517801-config\") pod \"service-ca-operator-777779d784-qcgnq\" (UID: \"87ed22c8-2d67-46b7-91e9-292293517801\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.919994 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/689505d7-9623-458a-b60a-e584c405540d-config-volume\") pod \"dns-default-hrfrv\" (UID: \"689505d7-9623-458a-b60a-e584c405540d\") " pod="openshift-dns/dns-default-hrfrv" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920009 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8bad892-59d1-45b5-a388-156353675860-config-volume\") pod \"collect-profiles-29411385-jvp44\" (UID: \"c8bad892-59d1-45b5-a388-156353675860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920025 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/689505d7-9623-458a-b60a-e584c405540d-metrics-tls\") pod \"dns-default-hrfrv\" (UID: \"689505d7-9623-458a-b60a-e584c405540d\") " pod="openshift-dns/dns-default-hrfrv" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920040 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/47088586-bc86-4ee1-99db-31eb2eb14ffc-proxy-tls\") pod \"machine-config-operator-74547568cd-r96rb\" (UID: \"47088586-bc86-4ee1-99db-31eb2eb14ffc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920055 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f328b6e0-83f8-4ef2-abed-fe6dbabab077-plugins-dir\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920070 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5ca4e0fc-6aab-4f08-afdf-d61583c63f6f-default-certificate\") pod \"router-default-5444994796-pqzl9\" (UID: \"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f\") " pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920092 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzft7\" (UniqueName: \"kubernetes.io/projected/f328b6e0-83f8-4ef2-abed-fe6dbabab077-kube-api-access-bzft7\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920110 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twd8l\" (UniqueName: \"kubernetes.io/projected/0f73e926-f5cf-46b3-afc7-0fe387cf5704-kube-api-access-twd8l\") pod \"machine-config-server-8m8hp\" (UID: \"0f73e926-f5cf-46b3-afc7-0fe387cf5704\") " pod="openshift-machine-config-operator/machine-config-server-8m8hp" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920125 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8bad892-59d1-45b5-a388-156353675860-secret-volume\") pod \"collect-profiles-29411385-jvp44\" (UID: \"c8bad892-59d1-45b5-a388-156353675860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920145 4625 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6gbvj\" (UniqueName: \"kubernetes.io/projected/a7490ca6-a9c3-4103-98e8-951179079bf7-kube-api-access-6gbvj\") pod \"ingress-canary-f58fc\" (UID: \"a7490ca6-a9c3-4103-98e8-951179079bf7\") " pod="openshift-ingress-canary/ingress-canary-f58fc" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920160 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5ca4e0fc-6aab-4f08-afdf-d61583c63f6f-stats-auth\") pod \"router-default-5444994796-pqzl9\" (UID: \"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f\") " pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920176 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-serving-cert\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920191 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7e8d817d-9152-48c4-b7b0-f9df76891753-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g2crb\" (UID: \"7e8d817d-9152-48c4-b7b0-f9df76891753\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920206 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc9d2\" (UniqueName: \"kubernetes.io/projected/7e8d817d-9152-48c4-b7b0-f9df76891753-kube-api-access-mc9d2\") pod \"olm-operator-6b444d44fb-g2crb\" (UID: \"7e8d817d-9152-48c4-b7b0-f9df76891753\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920237 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f328b6e0-83f8-4ef2-abed-fe6dbabab077-mountpoint-dir\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920253 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pfkz\" (UniqueName: \"kubernetes.io/projected/0e4e6fc1-bf89-455e-8409-31ba869ffdf1-kube-api-access-4pfkz\") pod \"multus-admission-controller-857f4d67dd-bfk9k\" (UID: \"0e4e6fc1-bf89-455e-8409-31ba869ffdf1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bfk9k" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920270 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9lfg\" (UniqueName: \"kubernetes.io/projected/bfa9a143-ca0d-4f32-b9a7-b2acb327bedc-kube-api-access-h9lfg\") pod \"control-plane-machine-set-operator-78cbb6b69f-sjskb\" (UID: \"bfa9a143-ca0d-4f32-b9a7-b2acb327bedc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sjskb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920288 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82577eb4-f869-4200-b5b4-929920b4272a-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-kfxdf\" (UID: \"82577eb4-f869-4200-b5b4-929920b4272a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920318 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-etcd-client\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.920335 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/748aaca7-daf1-4bd8-b397-1b3c6eedbc4a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hfdtw\" (UID: \"748aaca7-daf1-4bd8-b397-1b3c6eedbc4a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.921164 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-config\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.921946 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/748aaca7-daf1-4bd8-b397-1b3c6eedbc4a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hfdtw\" (UID: \"748aaca7-daf1-4bd8-b397-1b3c6eedbc4a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw" Dec 02 13:46:39 crc kubenswrapper[4625]: E1202 13:46:39.922446 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:40.42241613 +0000 UTC m=+156.384593205 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.922691 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8pwk\" (UniqueName: \"kubernetes.io/projected/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-kube-api-access-p8pwk\") pod \"console-f9d7485db-pr728\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") " pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.923435 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-etcd-service-ca\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.924616 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg4nv\" (UniqueName: \"kubernetes.io/projected/35ca6406-d63b-41a2-9217-85afd26abacd-kube-api-access-kg4nv\") pod \"cluster-samples-operator-665b6dd947-bnk2h\" (UID: \"35ca6406-d63b-41a2-9217-85afd26abacd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bnk2h" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.925446 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8bad892-59d1-45b5-a388-156353675860-config-volume\") pod \"collect-profiles-29411385-jvp44\" (UID: \"c8bad892-59d1-45b5-a388-156353675860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.925657 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d9ab4834-e296-4113-ad72-c1e6c86b3ee6-signing-cabundle\") pod \"service-ca-9c57cc56f-66bnq\" (UID: \"d9ab4834-e296-4113-ad72-c1e6c86b3ee6\") " pod="openshift-service-ca/service-ca-9c57cc56f-66bnq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.926395 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0f73e926-f5cf-46b3-afc7-0fe387cf5704-certs\") pod \"machine-config-server-8m8hp\" (UID: \"0f73e926-f5cf-46b3-afc7-0fe387cf5704\") " pod="openshift-machine-config-operator/machine-config-server-8m8hp" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.931730 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ed22c8-2d67-46b7-91e9-292293517801-config\") pod \"service-ca-operator-777779d784-qcgnq\" (UID: \"87ed22c8-2d67-46b7-91e9-292293517801\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.932712 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4065d249-ffb1-406a-9e88-b6b97cf70f2a-marketplace-operator-metrics\") pod 
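Note: the "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" failure above (and repeated below) means kubelet cannot construct a CSI client for the PVC because that driver has not yet registered itself over the kubelet plugin registration socket; the csi-hostpathplugin-dxltq pod that provides the driver is itself still being mounted and started (its registration-dir and plugins-dir host-path volumes appear in the entries above), so the error is expected to stop once that pod's registrar registers the driver. A rough Go sketch of the lookup that produces this error, against a simplified in-memory registry (not kubelet's actual store):

package main

import (
	"fmt"
	"sync"
)

// csiDriverRegistry loosely mimics kubelet's node-local list of CSI drivers
// that have completed plugin registration. Hypothetical simplification.
type csiDriverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]struct{}
}

// register is what the driver's registrar sidecar effectively triggers.
func (r *csiDriverRegistry) register(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = struct{}{}
}

// client fails with the same message shape the log shows while the driver
// is absent from the registry.
func (r *csiDriverRegistry) client(name string) error {
	r.mu.RLock()
	defer r.mu.RUnlock()
	if _, ok := r.drivers[name]; !ok {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return nil
}

func main() {
	reg := &csiDriverRegistry{drivers: map[string]struct{}{}}
	fmt.Println(reg.client("kubevirt.io.hostpath-provisioner")) // error: not registered yet
	reg.register("kubevirt.io.hostpath-provisioner")
	fmt.Println(reg.client("kubevirt.io.hostpath-provisioner")) // <nil> once registered
}

(On a live cluster, kubectl get csidriver lists CSIDriver API objects, but the registration list kubelet consults here is node-local state.)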
\"marketplace-operator-79b997595-hm5k5\" (UID: \"4065d249-ffb1-406a-9e88-b6b97cf70f2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.933303 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/689505d7-9623-458a-b60a-e584c405540d-metrics-tls\") pod \"dns-default-hrfrv\" (UID: \"689505d7-9623-458a-b60a-e584c405540d\") " pod="openshift-dns/dns-default-hrfrv" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.933584 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4065d249-ffb1-406a-9e88-b6b97cf70f2a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hm5k5\" (UID: \"4065d249-ffb1-406a-9e88-b6b97cf70f2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.937144 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-etcd-client\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.938991 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/47088586-bc86-4ee1-99db-31eb2eb14ffc-images\") pod \"machine-config-operator-74547568cd-r96rb\" (UID: \"47088586-bc86-4ee1-99db-31eb2eb14ffc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.939474 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7490ca6-a9c3-4103-98e8-951179079bf7-cert\") pod \"ingress-canary-f58fc\" (UID: \"a7490ca6-a9c3-4103-98e8-951179079bf7\") " pod="openshift-ingress-canary/ingress-canary-f58fc" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.939879 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f328b6e0-83f8-4ef2-abed-fe6dbabab077-registration-dir\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.949443 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/748aaca7-daf1-4bd8-b397-1b3c6eedbc4a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hfdtw\" (UID: \"748aaca7-daf1-4bd8-b397-1b3c6eedbc4a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.950071 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/47088586-bc86-4ee1-99db-31eb2eb14ffc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r96rb\" (UID: \"47088586-bc86-4ee1-99db-31eb2eb14ffc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.950517 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-etcd-ca\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.951537 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f328b6e0-83f8-4ef2-abed-fe6dbabab077-socket-dir\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.951657 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.952869 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f328b6e0-83f8-4ef2-abed-fe6dbabab077-plugins-dir\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.953617 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/689505d7-9623-458a-b60a-e584c405540d-config-volume\") pod \"dns-default-hrfrv\" (UID: \"689505d7-9623-458a-b60a-e584c405540d\") " pod="openshift-dns/dns-default-hrfrv" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.953700 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0f73e926-f5cf-46b3-afc7-0fe387cf5704-node-bootstrap-token\") pod \"machine-config-server-8m8hp\" (UID: \"0f73e926-f5cf-46b3-afc7-0fe387cf5704\") " pod="openshift-machine-config-operator/machine-config-server-8m8hp" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.954244 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/748aaca7-daf1-4bd8-b397-1b3c6eedbc4a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hfdtw\" (UID: \"748aaca7-daf1-4bd8-b397-1b3c6eedbc4a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.955884 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7c402f2a-3e9f-4eba-a881-a59ae3626f5a-profile-collector-cert\") pod \"catalog-operator-68c6474976-qcjf2\" (UID: \"7c402f2a-3e9f-4eba-a881-a59ae3626f5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.958528 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8bad892-59d1-45b5-a388-156353675860-secret-volume\") pod \"collect-profiles-29411385-jvp44\" (UID: \"c8bad892-59d1-45b5-a388-156353675860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.959147 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b067825c-8b40-4b11-bf8b-52ebf31ec4ba-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wqc6c\" 
(UID: \"b067825c-8b40-4b11-bf8b-52ebf31ec4ba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.965815 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f328b6e0-83f8-4ef2-abed-fe6dbabab077-mountpoint-dir\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.966137 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/47088586-bc86-4ee1-99db-31eb2eb14ffc-proxy-tls\") pod \"machine-config-operator-74547568cd-r96rb\" (UID: \"47088586-bc86-4ee1-99db-31eb2eb14ffc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.969113 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ca4e0fc-6aab-4f08-afdf-d61583c63f6f-metrics-certs\") pod \"router-default-5444994796-pqzl9\" (UID: \"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f\") " pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.972051 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ca4e0fc-6aab-4f08-afdf-d61583c63f6f-service-ca-bundle\") pod \"router-default-5444994796-pqzl9\" (UID: \"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f\") " pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.975705 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f328b6e0-83f8-4ef2-abed-fe6dbabab077-csi-data-dir\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.976790 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.981820 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5ca4e0fc-6aab-4f08-afdf-d61583c63f6f-default-certificate\") pod \"router-default-5444994796-pqzl9\" (UID: \"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f\") " pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.982528 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p57n\" (UniqueName: \"kubernetes.io/projected/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-kube-api-access-8p57n\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.982802 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.984188 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-955cd\" (UniqueName: \"kubernetes.io/projected/b067825c-8b40-4b11-bf8b-52ebf31ec4ba-kube-api-access-955cd\") pod \"package-server-manager-789f6589d5-wqc6c\" (UID: \"b067825c-8b40-4b11-bf8b-52ebf31ec4ba\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c" Dec 02 13:46:39 crc kubenswrapper[4625]: I1202 13:46:39.984356 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnpd4\" (UniqueName: \"kubernetes.io/projected/4065d249-ffb1-406a-9e88-b6b97cf70f2a-kube-api-access-bnpd4\") pod \"marketplace-operator-79b997595-hm5k5\" (UID: \"4065d249-ffb1-406a-9e88-b6b97cf70f2a\") " pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.028849 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:40 crc kubenswrapper[4625]: E1202 13:46:40.029288 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:40.529274716 +0000 UTC m=+156.491451791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.029380 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.032097 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7c402f2a-3e9f-4eba-a881-a59ae3626f5a-srv-cert\") pod \"catalog-operator-68c6474976-qcjf2\" (UID: \"7c402f2a-3e9f-4eba-a881-a59ae3626f5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.033099 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d9ab4834-e296-4113-ad72-c1e6c86b3ee6-signing-key\") pod \"service-ca-9c57cc56f-66bnq\" (UID: \"d9ab4834-e296-4113-ad72-c1e6c86b3ee6\") " pod="openshift-service-ca/service-ca-9c57cc56f-66bnq" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.034860 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb264\" (UniqueName: \"kubernetes.io/projected/87ed22c8-2d67-46b7-91e9-292293517801-kube-api-access-jb264\") pod \"service-ca-operator-777779d784-qcgnq\" (UID: \"87ed22c8-2d67-46b7-91e9-292293517801\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.035166 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef408b1b-6bae-454b-b9a2-3dd62ffcacf2-serving-cert\") pod \"etcd-operator-b45778765-9zdhl\" (UID: \"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.035702 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7cnv\" (UniqueName: \"kubernetes.io/projected/5ca4e0fc-6aab-4f08-afdf-d61583c63f6f-kube-api-access-f7cnv\") pod \"router-default-5444994796-pqzl9\" (UID: \"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f\") " pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.036102 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5ca4e0fc-6aab-4f08-afdf-d61583c63f6f-stats-auth\") pod \"router-default-5444994796-pqzl9\" (UID: \"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f\") " pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.036719 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lztlf\" (UniqueName: \"kubernetes.io/projected/d9ab4834-e296-4113-ad72-c1e6c86b3ee6-kube-api-access-lztlf\") pod \"service-ca-9c57cc56f-66bnq\" (UID: \"d9ab4834-e296-4113-ad72-c1e6c86b3ee6\") " pod="openshift-service-ca/service-ca-9c57cc56f-66bnq" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.037708 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bssbg\" (UniqueName: \"kubernetes.io/projected/c8bad892-59d1-45b5-a388-156353675860-kube-api-access-bssbg\") pod \"collect-profiles-29411385-jvp44\" (UID: \"c8bad892-59d1-45b5-a388-156353675860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.038483 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdbs9\" (UniqueName: 
\"kubernetes.io/projected/7c402f2a-3e9f-4eba-a881-a59ae3626f5a-kube-api-access-jdbs9\") pod \"catalog-operator-68c6474976-qcjf2\" (UID: \"7c402f2a-3e9f-4eba-a881-a59ae3626f5a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.041625 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzft7\" (UniqueName: \"kubernetes.io/projected/f328b6e0-83f8-4ef2-abed-fe6dbabab077-kube-api-access-bzft7\") pod \"csi-hostpathplugin-dxltq\" (UID: \"f328b6e0-83f8-4ef2-abed-fe6dbabab077\") " pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.042644 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gbvj\" (UniqueName: \"kubernetes.io/projected/a7490ca6-a9c3-4103-98e8-951179079bf7-kube-api-access-6gbvj\") pod \"ingress-canary-f58fc\" (UID: \"a7490ca6-a9c3-4103-98e8-951179079bf7\") " pod="openshift-ingress-canary/ingress-canary-f58fc" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.047663 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-f58fc" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.050931 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52jj5\" (UniqueName: \"kubernetes.io/projected/47088586-bc86-4ee1-99db-31eb2eb14ffc-kube-api-access-52jj5\") pod \"machine-config-operator-74547568cd-r96rb\" (UID: \"47088586-bc86-4ee1-99db-31eb2eb14ffc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.100591 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.126639 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.130362 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:40 crc kubenswrapper[4625]: E1202 13:46:40.130831 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:40.630812436 +0000 UTC m=+156.592989511 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.160426 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2ec55e1a-74d5-4c19-abde-2b8d8e9f392c-tmpfs\") pod \"packageserver-d55dfcdfc-9vd9w\" (UID: \"2ec55e1a-74d5-4c19-abde-2b8d8e9f392c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.160921 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0e4e6fc1-bf89-455e-8409-31ba869ffdf1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bfk9k\" (UID: \"0e4e6fc1-bf89-455e-8409-31ba869ffdf1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bfk9k" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.161440 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pfkz\" (UniqueName: \"kubernetes.io/projected/0e4e6fc1-bf89-455e-8409-31ba869ffdf1-kube-api-access-4pfkz\") pod \"multus-admission-controller-857f4d67dd-bfk9k\" (UID: \"0e4e6fc1-bf89-455e-8409-31ba869ffdf1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bfk9k" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.161760 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjt9v\" (UniqueName: \"kubernetes.io/projected/689505d7-9623-458a-b60a-e584c405540d-kube-api-access-hjt9v\") pod \"dns-default-hrfrv\" (UID: \"689505d7-9623-458a-b60a-e584c405540d\") " pod="openshift-dns/dns-default-hrfrv" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.163644 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bnk2h" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.165569 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87ed22c8-2d67-46b7-91e9-292293517801-serving-cert\") pod \"service-ca-operator-777779d784-qcgnq\" (UID: \"87ed22c8-2d67-46b7-91e9-292293517801\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.165562 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7e8d817d-9152-48c4-b7b0-f9df76891753-srv-cert\") pod \"olm-operator-6b444d44fb-g2crb\" (UID: \"7e8d817d-9152-48c4-b7b0-f9df76891753\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.168303 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7e8d817d-9152-48c4-b7b0-f9df76891753-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g2crb\" (UID: \"7e8d817d-9152-48c4-b7b0-f9df76891753\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.171433 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc9d2\" (UniqueName: \"kubernetes.io/projected/7e8d817d-9152-48c4-b7b0-f9df76891753-kube-api-access-mc9d2\") pod \"olm-operator-6b444d44fb-g2crb\" (UID: \"7e8d817d-9152-48c4-b7b0-f9df76891753\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.172083 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c87d97fb-8391-4f0f-8b3d-a404721de262-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.172793 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfa9a143-ca0d-4f32-b9a7-b2acb327bedc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sjskb\" (UID: \"bfa9a143-ca0d-4f32-b9a7-b2acb327bedc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sjskb" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.173480 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ec55e1a-74d5-4c19-abde-2b8d8e9f392c-apiservice-cert\") pod \"packageserver-d55dfcdfc-9vd9w\" (UID: \"2ec55e1a-74d5-4c19-abde-2b8d8e9f392c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.173592 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ec55e1a-74d5-4c19-abde-2b8d8e9f392c-webhook-cert\") pod \"packageserver-d55dfcdfc-9vd9w\" (UID: \"2ec55e1a-74d5-4c19-abde-2b8d8e9f392c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.174234 4625 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf2cc\" (UniqueName: \"kubernetes.io/projected/2ec55e1a-74d5-4c19-abde-2b8d8e9f392c-kube-api-access-gf2cc\") pod \"packageserver-d55dfcdfc-9vd9w\" (UID: \"2ec55e1a-74d5-4c19-abde-2b8d8e9f392c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.175152 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.177692 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82577eb4-f869-4200-b5b4-929920b4272a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfxdf\" (UID: \"82577eb4-f869-4200-b5b4-929920b4272a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.186856 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.192001 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c87d97fb-8391-4f0f-8b3d-a404721de262-registry-certificates\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.194032 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c87d97fb-8391-4f0f-8b3d-a404721de262-trusted-ca\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.194251 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82577eb4-f869-4200-b5b4-929920b4272a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfxdf\" (UID: \"82577eb4-f869-4200-b5b4-929920b4272a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.194284 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.196601 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c87d97fb-8391-4f0f-8b3d-a404721de262-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.200132 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbnxv\" (UniqueName: \"kubernetes.io/projected/82577eb4-f869-4200-b5b4-929920b4272a-kube-api-access-zbnxv\") pod \"kube-storage-version-migrator-operator-b67b599dd-kfxdf\" (UID: \"82577eb4-f869-4200-b5b4-929920b4272a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.201112 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twd8l\" (UniqueName: \"kubernetes.io/projected/0f73e926-f5cf-46b3-afc7-0fe387cf5704-kube-api-access-twd8l\") pod \"machine-config-server-8m8hp\" (UID: \"0f73e926-f5cf-46b3-afc7-0fe387cf5704\") " pod="openshift-machine-config-operator/machine-config-server-8m8hp" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.201488 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.204270 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpsks\" (UniqueName: \"kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-kube-api-access-wpsks\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.208082 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9lfg\" (UniqueName: \"kubernetes.io/projected/bfa9a143-ca0d-4f32-b9a7-b2acb327bedc-kube-api-access-h9lfg\") pod \"control-plane-machine-set-operator-78cbb6b69f-sjskb\" (UID: \"bfa9a143-ca0d-4f32-b9a7-b2acb327bedc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sjskb" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.215181 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-bound-sa-token\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.261843 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:40 crc kubenswrapper[4625]: E1202 13:46:40.263025 4625 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:40.763009381 +0000 UTC m=+156.725186456 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.264816 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.271236 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.274979 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bfk9k" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.277744 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.289127 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" podStartSLOduration=134.28909499 podStartE2EDuration="2m14.28909499s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:40.215076238 +0000 UTC m=+156.177253313" watchObservedRunningTime="2025-12-02 13:46:40.28909499 +0000 UTC m=+156.251272075" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.294730 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8m8hp" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.296617 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-66bnq" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.306751 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.309900 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dsfpw"] Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.310132 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dhfc6" podStartSLOduration=135.310104782 podStartE2EDuration="2m15.310104782s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:40.290393646 +0000 UTC m=+156.252570721" watchObservedRunningTime="2025-12-02 13:46:40.310104782 +0000 UTC m=+156.272281867" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.310296 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.324442 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-h2nf9"] Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.332987 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.334501 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dxltq" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.340257 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hrfrv" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.362967 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:40 crc kubenswrapper[4625]: E1202 13:46:40.363462 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:40.863396221 +0000 UTC m=+156.825573296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.391340 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf" Dec 02 13:46:40 crc kubenswrapper[4625]: W1202 13:46:40.431610 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf086744a_9c7e_46bc_b05a_cce4599e47aa.slice/crio-3d545fe01554989fe84c3eb5adb33485cd548efab90973b5f9b2f8d929125daf WatchSource:0}: Error finding container 3d545fe01554989fe84c3eb5adb33485cd548efab90973b5f9b2f8d929125daf: Status 404 returned error can't find the container with id 3d545fe01554989fe84c3eb5adb33485cd548efab90973b5f9b2f8d929125daf Dec 02 13:46:40 crc kubenswrapper[4625]: W1202 13:46:40.442591 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48f0e27b_d4d8_4118_98af_e6fa04663c27.slice/crio-a5b8798205c55d245c545a6969bc0c37dfbe61429efa2a3b9f79addf11c482a3 WatchSource:0}: Error finding container a5b8798205c55d245c545a6969bc0c37dfbe61429efa2a3b9f79addf11c482a3: Status 404 returned error can't find the container with id a5b8798205c55d245c545a6969bc0c37dfbe61429efa2a3b9f79addf11c482a3 Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.464907 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:40 crc kubenswrapper[4625]: E1202 13:46:40.465277 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:40.965248351 +0000 UTC m=+156.927425426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.466443 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sjskb" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.476387 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dsfpw" event={"ID":"f086744a-9c7e-46bc-b05a-cce4599e47aa","Type":"ContainerStarted","Data":"3d545fe01554989fe84c3eb5adb33485cd548efab90973b5f9b2f8d929125daf"} Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.477270 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5" event={"ID":"b4b56821-f503-4316-b494-6c53ea6037b4","Type":"ContainerStarted","Data":"2aa66b805b14f656f0c39acaf6e6417a06d590fa196d188519473b1674172947"} Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.479273 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-h2nf9" event={"ID":"48f0e27b-d4d8-4118-98af-e6fa04663c27","Type":"ContainerStarted","Data":"a5b8798205c55d245c545a6969bc0c37dfbe61429efa2a3b9f79addf11c482a3"} Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.505629 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q" event={"ID":"30f3fae9-f4d5-4f32-9498-5d2a2d801654","Type":"ContainerStarted","Data":"d37b3433b4b5da2e4c78107f8156a2751a51771865414eb8cba57456a2391988"} Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.566015 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:40 crc kubenswrapper[4625]: E1202 13:46:40.566617 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:41.066598236 +0000 UTC m=+157.028775321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.590871 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pqzl9" event={"ID":"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f","Type":"ContainerStarted","Data":"0ee5188d7c6901825bb9261b8aa9c28414609767ccd5b9648cf2b895e83ca688"} Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.592473 4625 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rxs7k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.592511 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" podUID="875633b5-d52b-4d18-9322-dfbe2d73aed4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.592581 4625 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ctx7m container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.592623 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" podUID="1e0bdd20-db2f-4cc8-b939-5ccb65599bbb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.670001 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:40 crc kubenswrapper[4625]: E1202 13:46:40.670899 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:41.170884523 +0000 UTC m=+157.133061598 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.670914 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" podStartSLOduration=135.670899593 podStartE2EDuration="2m15.670899593s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:40.647971709 +0000 UTC m=+156.610148794" watchObservedRunningTime="2025-12-02 13:46:40.670899593 +0000 UTC m=+156.633076668" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.721993 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-p4l8q" podStartSLOduration=134.721978762 podStartE2EDuration="2m14.721978762s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:40.721457958 +0000 UTC m=+156.683635033" watchObservedRunningTime="2025-12-02 13:46:40.721978762 +0000 UTC m=+156.684155837" Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.754923 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vz5q2"] Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.770914 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:40 crc kubenswrapper[4625]: E1202 13:46:40.771968 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:41.271952111 +0000 UTC m=+157.234129176 (durationBeforeRetry 500ms). 
Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.773929 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz"]
Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.777933 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r4vx5"]
Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.799643 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9xkt2"]
Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.806645 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5sq66"]
Dec 02 13:46:40 crc kubenswrapper[4625]: I1202 13:46:40.875197 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:40 crc kubenswrapper[4625]: E1202 13:46:40.875643 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:41.3756312 +0000 UTC m=+157.337808275 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.037848 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:41 crc kubenswrapper[4625]: E1202 13:46:41.038583 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:41.538566711 +0000 UTC m=+157.500743786 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.039627 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv"]
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.139635 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:41 crc kubenswrapper[4625]: E1202 13:46:41.140404 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:41.640389049 +0000 UTC m=+157.602566124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.174944 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tjbfd"]
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.268072 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x8tnt"]
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.280681 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh"]
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.283422 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt"]
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.287047 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:41 crc kubenswrapper[4625]: E1202 13:46:41.290781 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:41.790745438 +0000 UTC m=+157.752922523 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:41 crc kubenswrapper[4625]: W1202 13:46:41.348735 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fcdeff4_0e89_4a89_bb26_53bae5e6a0ef.slice/crio-0b0cfac7b39ec87daa739bb672a52bd2bf17383bac5519784230a5b2504cf953 WatchSource:0}: Error finding container 0b0cfac7b39ec87daa739bb672a52bd2bf17383bac5519784230a5b2504cf953: Status 404 returned error can't find the container with id 0b0cfac7b39ec87daa739bb672a52bd2bf17383bac5519784230a5b2504cf953
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.393772 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:41 crc kubenswrapper[4625]: E1202 13:46:41.394152 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:41.894137159 +0000 UTC m=+157.856314234 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.497265 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:41 crc kubenswrapper[4625]: E1202 13:46:41.497738 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:41.997720836 +0000 UTC m=+157.959897911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.515021 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-pr728"]
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.694186 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:41 crc kubenswrapper[4625]: E1202 13:46:41.694812 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:42.194795285 +0000 UTC m=+158.156972360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.754643 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5sq66" event={"ID":"637baf2f-239a-405a-8cde-a46bf3f7877d","Type":"ContainerStarted","Data":"7cccdbe83f9f4507088c95d08c4c6a6f8d1e26d499e442332cc1bb759fdefb14"}
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.768281 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" event={"ID":"56474967-807c-4a3e-8037-45dfb0b88fe2","Type":"ContainerStarted","Data":"4b52beddf23179e47ff8dccc085c5454f85df36edb2aa7f38f09cc2527f1e19d"}
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.774132 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9xkt2" event={"ID":"523dde62-fffc-455c-986c-9b69306a6225","Type":"ContainerStarted","Data":"ae8d5183a2fab46d2a276a1d0291703a7b60a509fc3598932e68bfcd2c969e78"}
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.776377 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2" event={"ID":"b76c3489-3b8c-4e02-b757-00e290f24fc9","Type":"ContainerStarted","Data":"0a945a8dd99b2fc77ffdcaf582f4ac8481dbf295f9f44272ae0a171796aa2b6c"}
Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.820607 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4vx5" event={"ID":"78ff7018-4a24-46de-b71b-6a4fb8b8b8ee","Type":"ContainerStarted","Data":"0d088aef2f6fe48862c37723a6cbbdd44a5aab325a299699f5676ca5e9664331"}
event={"ID":"78ff7018-4a24-46de-b71b-6a4fb8b8b8ee","Type":"ContainerStarted","Data":"0d088aef2f6fe48862c37723a6cbbdd44a5aab325a299699f5676ca5e9664331"} Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.821349 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt" event={"ID":"6c0a10a7-ccfe-45a2-8b74-df21b80d67df","Type":"ContainerStarted","Data":"7736fc2f56198e586f392d64669cdf0fb00393755a7358684eaadbd86c8c62e0"} Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.876246 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.880139 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4" event={"ID":"5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef","Type":"ContainerStarted","Data":"0b0cfac7b39ec87daa739bb672a52bd2bf17383bac5519784230a5b2504cf953"} Dec 02 13:46:41 crc kubenswrapper[4625]: E1202 13:46:41.880442 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:42.380399982 +0000 UTC m=+158.342577057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.881369 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" event={"ID":"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15","Type":"ContainerStarted","Data":"56700a4a051928625bc1e47bf47fdb3c70b16f1b898cd688a188cf832d7a29dc"} Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.883662 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5" event={"ID":"b4b56821-f503-4316-b494-6c53ea6037b4","Type":"ContainerStarted","Data":"072a06beee9336812095172dd01eede436e232eae468b3f86d4c6310ed513f9b"} Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.887474 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:41 crc kubenswrapper[4625]: E1202 13:46:41.913954 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-02 13:46:42.413931804 +0000 UTC m=+158.376108879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.956078 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz" event={"ID":"244411d4-7a54-4ce6-9eb8-cbaa12838fc0","Type":"ContainerStarted","Data":"f0ea2b2b6fa8d0f8050339529e1c1b45f9518eb84c7eff095f02534dcd332ea7"} Dec 02 13:46:41 crc kubenswrapper[4625]: I1202 13:46:41.958038 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" event={"ID":"935b4d1d-2bd9-4594-a23b-d823402ac019","Type":"ContainerStarted","Data":"f417a8e7cda251c8f838f9c5c471115b36cf353472712dbf3d4e8ef62dcf4f7b"} Dec 02 13:46:42 crc kubenswrapper[4625]: I1202 13:46:42.083278 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:42 crc kubenswrapper[4625]: E1202 13:46:42.084723 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:42.584706378 +0000 UTC m=+158.546883453 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:42 crc kubenswrapper[4625]: I1202 13:46:42.193043 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:42 crc kubenswrapper[4625]: E1202 13:46:42.193409 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:42.693394513 +0000 UTC m=+158.655571588 (durationBeforeRetry 500ms). 
Dec 02 13:46:42 crc kubenswrapper[4625]: I1202 13:46:42.293838 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:42 crc kubenswrapper[4625]: E1202 13:46:42.294216 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:42.794202114 +0000 UTC m=+158.756379189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:42 crc kubenswrapper[4625]: I1202 13:46:42.397164 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:42 crc kubenswrapper[4625]: E1202 13:46:42.397526 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:42.897514383 +0000 UTC m=+158.859691458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:42 crc kubenswrapper[4625]: I1202 13:46:42.497900 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:42 crc kubenswrapper[4625]: E1202 13:46:42.498890 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:42.998874699 +0000 UTC m=+158.961051774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:42 crc kubenswrapper[4625]: I1202 13:46:42.600565 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:42 crc kubenswrapper[4625]: E1202 13:46:42.600926 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:43.100911544 +0000 UTC m=+159.063088619 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:42 crc kubenswrapper[4625]: I1202 13:46:42.703114 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:42 crc kubenswrapper[4625]: E1202 13:46:42.713914 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:43.213880696 +0000 UTC m=+159.176057771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:42 crc kubenswrapper[4625]: I1202 13:46:42.729000 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2wn5" podStartSLOduration=137.728979607 podStartE2EDuration="2m17.728979607s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:42.043842206 +0000 UTC m=+158.006019281" watchObservedRunningTime="2025-12-02 13:46:42.728979607 +0000 UTC m=+158.691156682"
Dec 02 13:46:42 crc kubenswrapper[4625]: I1202 13:46:42.732662 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll"]
Dec 02 13:46:42 crc kubenswrapper[4625]: I1202 13:46:42.815172 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:42 crc kubenswrapper[4625]: E1202 13:46:42.816815 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:43.316794444 +0000 UTC m=+159.278971519 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:42 crc kubenswrapper[4625]: I1202 13:46:42.919737 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:42 crc kubenswrapper[4625]: E1202 13:46:42.920111 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:43.420092763 +0000 UTC m=+159.382269838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.005051 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pr728" event={"ID":"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5","Type":"ContainerStarted","Data":"271711ab94655d0bd69f32a28bc363fd39153e4f1f890c7beec59880ed923370"}
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.022730 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll" event={"ID":"4667cfe8-6ad8-461f-9e16-79a64a33642b","Type":"ContainerStarted","Data":"fed58b4e6a2f7fc7c13e6c3cb7d1231a2f75b5778f62725b076b639d24dfbd66"}
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.040549 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:43 crc kubenswrapper[4625]: E1202 13:46:43.040838 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:43.540827616 +0000 UTC m=+159.503004691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.068474 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" event={"ID":"9da966a1-56e1-4805-b03d-97b9d2ca467a","Type":"ContainerStarted","Data":"b41fabe54826d0d52cfad6684eeebf4999c28cd21928c63535b2f2705f31afbd"}
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.072850 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8m8hp" event={"ID":"0f73e926-f5cf-46b3-afc7-0fe387cf5704","Type":"ContainerStarted","Data":"9f5a338af68b727ab0a6d00e2f53bf34cc57eea76bd9c088ecaab51530cd6297"}
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.141154 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:43 crc kubenswrapper[4625]: E1202 13:46:43.141765 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:43.6417322 +0000 UTC m=+159.603909275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.142106 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:43 crc kubenswrapper[4625]: E1202 13:46:43.142441 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:43.642431899 +0000 UTC m=+159.604609014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.243150 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:43 crc kubenswrapper[4625]: E1202 13:46:43.243497 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:43.743481507 +0000 UTC m=+159.705658582 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.346086 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:43 crc kubenswrapper[4625]: E1202 13:46:43.346479 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:43.846466588 +0000 UTC m=+159.808643663 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.448662 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:43 crc kubenswrapper[4625]: E1202 13:46:43.449179 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:43.94915995 +0000 UTC m=+159.911337025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.550557 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:43 crc kubenswrapper[4625]: E1202 13:46:43.551907 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:44.051892153 +0000 UTC m=+160.014069228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.639544 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb"]
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.651524 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:43 crc kubenswrapper[4625]: E1202 13:46:43.651942 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:44.151922994 +0000 UTC m=+160.114100069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.734205 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw"]
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.741935 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hrfrv"]
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.756577 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:43 crc kubenswrapper[4625]: E1202 13:46:43.756932 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:44.256919579 +0000 UTC m=+160.219096654 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.887464 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w"]
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.888204 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:43 crc kubenswrapper[4625]: E1202 13:46:43.888451 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:44.388436795 +0000 UTC m=+160.350613870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:43 crc kubenswrapper[4625]: I1202 13:46:43.891021 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bnk2h"]
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.003049 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:44 crc kubenswrapper[4625]: E1202 13:46:44.003421 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:44.503407321 +0000 UTC m=+160.465584396 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.108287 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:44 crc kubenswrapper[4625]: E1202 13:46:44.111115 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:44.611087289 +0000 UTC m=+160.573264364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.112974 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-f58fc"]
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.123417 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.141898 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pqzl9" event={"ID":"5ca4e0fc-6aab-4f08-afdf-d61583c63f6f","Type":"ContainerStarted","Data":"a9f04d7c99424fb1fe3f5afcd155a7b18f83f56e18ebe37b90db8537084d61a2"}
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.154620 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pr728" event={"ID":"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5","Type":"ContainerStarted","Data":"f3117bbd66580a445924b4f4ff214aeedfccce391a0459d7d403aa20fbf62529"}
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.171296 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2"]
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.187156 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-h2nf9" event={"ID":"48f0e27b-d4d8-4118-98af-e6fa04663c27","Type":"ContainerStarted","Data":"5a9bd7c2a9a9879c5a1ef7c92740626e5da9cac0f85f8d31d34d5817f36b580c"}
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.187764 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-h2nf9"
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.207868 4625 patch_prober.go:28] interesting pod/console-operator-58897d9998-h2nf9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.207937 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-h2nf9" podUID="48f0e27b-d4d8-4118-98af-e6fa04663c27" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.209619 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:44 crc kubenswrapper[4625]: E1202 13:46:44.209964 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:44.709950267 +0000 UTC m=+160.672127342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.221218 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5sq66" event={"ID":"637baf2f-239a-405a-8cde-a46bf3f7877d","Type":"ContainerStarted","Data":"0dc7b6a8a6cf9d738fa028fb17aa963e4e95235ea0dce8d557b9f2d4005387b2"}
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.221772 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5sq66"
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.243747 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd"]
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.249758 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.249819 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.263454 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4" event={"ID":"5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef","Type":"ContainerStarted","Data":"3e8e2f2276e7f46044e25a0a5dded4eaf9047aa9145c0b37c65ea80c15807c46"}
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.263883 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-pqzl9" podStartSLOduration=138.263863434 podStartE2EDuration="2m18.263863434s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:44.263854683 +0000 UTC m=+160.226031778" watchObservedRunningTime="2025-12-02 13:46:44.263863434 +0000 UTC m=+160.226040509"
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.290525 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" event={"ID":"56474967-807c-4a3e-8037-45dfb0b88fe2","Type":"ContainerStarted","Data":"c367a9bf3f1e0e9a6d1a4f58c11726693ed797f2f4fd800b3bc4ea0dc15c5c99"}
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.293288 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt"
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.303352 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll" event={"ID":"4667cfe8-6ad8-461f-9e16-79a64a33642b","Type":"ContainerStarted","Data":"0bfc575e802c39d4ad526f03ea41a84e2095a409be4461481a4fbd883c8ff028"}
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.321194 4625 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-x8tnt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" start-of-body=
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.321256 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" podUID="56474967-807c-4a3e-8037-45dfb0b88fe2" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused"
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.321908 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz" event={"ID":"244411d4-7a54-4ce6-9eb8-cbaa12838fc0","Type":"ContainerStarted","Data":"06bcd7bb8e3f583a0ac43a7ecb8e47070e4baae91c681e9d354b158ffbc7429b"}
Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.324445 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:44 crc kubenswrapper[4625]: E1202 13:46:44.324570 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:44.824542573 +0000 UTC m=+160.786719648 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.325329 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:44 crc kubenswrapper[4625]: E1202 13:46:44.332543 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:44.83252425 +0000 UTC m=+160.794701405 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.350964 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-pr728" podStartSLOduration=139.350940191 podStartE2EDuration="2m19.350940191s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:44.341821314 +0000 UTC m=+160.303998389" watchObservedRunningTime="2025-12-02 13:46:44.350940191 +0000 UTC m=+160.313117266" Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.351485 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-h2nf9" podStartSLOduration=139.351477365 podStartE2EDuration="2m19.351477365s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:44.310557603 +0000 UTC m=+160.272734668" watchObservedRunningTime="2025-12-02 13:46:44.351477365 +0000 UTC m=+160.313654440" Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.361921 4625 generic.go:334] "Generic (PLEG): container finished" podID="935b4d1d-2bd9-4594-a23b-d823402ac019" containerID="62324aa30b6a5443e13c05ff26e6310eb2422e2ac3665f3b01505f2917fd3388" exitCode=0 Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.362092 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" event={"ID":"935b4d1d-2bd9-4594-a23b-d823402ac019","Type":"ContainerDied","Data":"62324aa30b6a5443e13c05ff26e6310eb2422e2ac3665f3b01505f2917fd3388"} Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.438391 4625 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hm5k5"] Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.439532 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.449155 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2" event={"ID":"b76c3489-3b8c-4e02-b757-00e290f24fc9","Type":"ContainerStarted","Data":"19b0dfb3edd874ec87fce307b23545042a3f352fc6ca4ff6c1a95a33c332a839"} Dec 02 13:46:44 crc kubenswrapper[4625]: E1202 13:46:44.458298 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:44.95826725 +0000 UTC m=+160.920444325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.471891 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gndll" podStartSLOduration=138.471868489 podStartE2EDuration="2m18.471868489s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:44.471855609 +0000 UTC m=+160.434032684" watchObservedRunningTime="2025-12-02 13:46:44.471868489 +0000 UTC m=+160.434045564" Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.482718 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8m8hp" event={"ID":"0f73e926-f5cf-46b3-afc7-0fe387cf5704","Type":"ContainerStarted","Data":"00d1402aca5201100f893149a7d4b6d6a6b91902086a69eeb653c06c17fe6a63"} Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.612624 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:44 crc kubenswrapper[4625]: E1202 13:46:44.615620 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:45.115592358 +0000 UTC m=+161.077769603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:44 crc kubenswrapper[4625]: I1202 13:46:44.654623 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dsfpw" event={"ID":"f086744a-9c7e-46bc-b05a-cce4599e47aa","Type":"ContainerStarted","Data":"c8c11417a4a06bfa44349f59f3078a461aca13391a76362b269200b7722fa56a"} Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:44.724266 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wxnpz" podStartSLOduration=138.724246893 podStartE2EDuration="2m18.724246893s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:44.611872327 +0000 UTC m=+160.574049402" watchObservedRunningTime="2025-12-02 13:46:44.724246893 +0000 UTC m=+160.686423968" Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:44.726593 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9xkt2" event={"ID":"523dde62-fffc-455c-986c-9b69306a6225","Type":"ContainerStarted","Data":"58295afcc942b05db4eceda606b21ed4a5bb55f41ce4603c2e1aff9162384117"} Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:44.727244 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:45 crc kubenswrapper[4625]: E1202 13:46:44.730737 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:45.230718308 +0000 UTC m=+161.192895383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:44.748405 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" podStartSLOduration=139.748389589 podStartE2EDuration="2m19.748389589s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:44.726580865 +0000 UTC m=+160.688757930" watchObservedRunningTime="2025-12-02 13:46:44.748389589 +0000 UTC m=+160.710566684" Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:44.748931 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sjskb"] Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:44.883810 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:45 crc kubenswrapper[4625]: E1202 13:46:44.884406 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:45.384388587 +0000 UTC m=+161.346565662 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:44.907586 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-5sq66" podStartSLOduration=139.907563347 podStartE2EDuration="2m19.907563347s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:44.795582052 +0000 UTC m=+160.757759127" watchObservedRunningTime="2025-12-02 13:46:44.907563347 +0000 UTC m=+160.869740422" Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:44.973596 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8m8hp" podStartSLOduration=9.973578952 podStartE2EDuration="9.973578952s" podCreationTimestamp="2025-12-02 13:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:44.972562594 +0000 UTC m=+160.934739669" watchObservedRunningTime="2025-12-02 13:46:44.973578952 +0000 UTC m=+160.935756017" Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:44.985815 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:45 crc kubenswrapper[4625]: E1202 13:46:44.986930 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:45.486913634 +0000 UTC m=+161.449090709 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.097614 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:45 crc kubenswrapper[4625]: E1202 13:46:45.099168 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:45.599156226 +0000 UTC m=+161.561333301 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.199289 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:45 crc kubenswrapper[4625]: E1202 13:46:45.199853 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:45.699826134 +0000 UTC m=+161.662003209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.201609 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:46:45 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:46:45 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:46:45 crc kubenswrapper[4625]: healthz check failed Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.201657 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.302003 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:45 crc kubenswrapper[4625]: E1202 13:46:45.302465 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:45.802447675 +0000 UTC m=+161.764624750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.349888 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-vz5q2" podStartSLOduration=140.349853664 podStartE2EDuration="2m20.349853664s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:45.247694446 +0000 UTC m=+161.209871531" watchObservedRunningTime="2025-12-02 13:46:45.349853664 +0000 UTC m=+161.312030749" Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.400678 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.400728 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt" event={"ID":"6c0a10a7-ccfe-45a2-8b74-df21b80d67df","Type":"ContainerStarted","Data":"f39ef1c2b57ea0846f773cbade870bcd0ad394c8dd276d563c6e9fc4a30dbcdb"} Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.400764 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4vx5" event={"ID":"78ff7018-4a24-46de-b71b-6a4fb8b8b8ee","Type":"ContainerStarted","Data":"c715779d6888a6c87c53bd0c5a0fcab5c2b6994a2aaa31ba575f93be42d54a43"} Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.400779 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" event={"ID":"9da966a1-56e1-4805-b03d-97b9d2ca467a","Type":"ContainerStarted","Data":"c1767afc3ef11a6bbdc9f86b5291178c56023284c56594148c1a7ef084cbcdca"} Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.400795 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9zdhl"] Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.400812 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq"] Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.400831 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44"] Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.400847 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c"] Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.402944 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:45 crc kubenswrapper[4625]: E1202 13:46:45.407750 4625 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:45.907726178 +0000 UTC m=+161.869903253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.435499 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb"] Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.438161 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dxltq"] Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.459460 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9xkt2" podStartSLOduration=139.459431444 podStartE2EDuration="2m19.459431444s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:45.370112085 +0000 UTC m=+161.332289160" watchObservedRunningTime="2025-12-02 13:46:45.459431444 +0000 UTC m=+161.421608509" Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.485771 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-66bnq"] Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.547380 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bfk9k"] Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.559969 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4vx5" podStartSLOduration=139.559938177 podStartE2EDuration="2m19.559938177s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:45.468359326 +0000 UTC m=+161.430536411" watchObservedRunningTime="2025-12-02 13:46:45.559938177 +0000 UTC m=+161.522115262" Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.589374 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf"] Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.590525 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:45 crc kubenswrapper[4625]: E1202 13:46:45.590893 4625 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:46.090876028 +0000 UTC m=+162.053053103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.691613 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:45 crc kubenswrapper[4625]: E1202 13:46:45.692210 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:46.192184233 +0000 UTC m=+162.154361308 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.794758 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:45 crc kubenswrapper[4625]: E1202 13:46:45.795071 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:46.295057579 +0000 UTC m=+162.257234654 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:45 crc kubenswrapper[4625]: I1202 13:46:45.897796 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:45 crc kubenswrapper[4625]: E1202 13:46:45.898195 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:46.398180524 +0000 UTC m=+162.360357599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.005016 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:46 crc kubenswrapper[4625]: E1202 13:46:46.006076 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:46.506051437 +0000 UTC m=+162.468228512 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.080481 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-66bnq" event={"ID":"d9ab4834-e296-4113-ad72-c1e6c86b3ee6","Type":"ContainerStarted","Data":"abca1a78a267bf8e0fcbc006f4e065a013c2c4e8d23df3d9d6de82717d9cac8e"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.082718 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd" event={"ID":"b28f8782-c7d3-4034-b269-90be9cbd9eec","Type":"ContainerStarted","Data":"16bf1cb98be5559c4d0c52f3745787dcd845722528cebbba9284060689febae4"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.083959 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" event={"ID":"4065d249-ffb1-406a-9e88-b6b97cf70f2a","Type":"ContainerStarted","Data":"d763f5317fa42a4122c1cca0d85e09ee3b5941821b6598a434175d82d1f84669"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.113679 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:46 crc kubenswrapper[4625]: E1202 13:46:46.113809 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:46.613791657 +0000 UTC m=+162.575968732 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.113930 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:46 crc kubenswrapper[4625]: E1202 13:46:46.114477 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:46.614456356 +0000 UTC m=+162.576633431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.121707 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:46:46 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:46:46 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:46:46 crc kubenswrapper[4625]: healthz check failed Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.121763 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.122135 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c" event={"ID":"b067825c-8b40-4b11-bf8b-52ebf31ec4ba","Type":"ContainerStarted","Data":"866947b3d3258c2e741ab497cea6a94eeaa268a7530015134711da1d5c0c480c"} Dec 02 13:46:46 crc kubenswrapper[4625]: E1202 13:46:46.217421 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:46.717393204 +0000 UTC m=+162.679570289 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.217284 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.217822 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:46 crc kubenswrapper[4625]: E1202 13:46:46.218275 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:46.718252428 +0000 UTC m=+162.680429503 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.240768 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" event={"ID":"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2","Type":"ContainerStarted","Data":"dc507efdeb803a83b5381fee86fd2f33a6504d7200c940980e975e44fc0e6e40"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.246453 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-f58fc" event={"ID":"a7490ca6-a9c3-4103-98e8-951179079bf7","Type":"ContainerStarted","Data":"3c93cd75c07115a6ebd714c2dfaa532ce3c82ec8b94f95dc05c4b3fb3f384fe3"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.252242 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" event={"ID":"7e8d817d-9152-48c4-b7b0-f9df76891753","Type":"ContainerStarted","Data":"7f9fbef6f8e8e9f9cf0677d2c4460a4dc9585d1f44703bdb4be3b5d33ab55019"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.266478 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq" event={"ID":"87ed22c8-2d67-46b7-91e9-292293517801","Type":"ContainerStarted","Data":"b9c39eec9f30a82ef2e9269563571ced0f1c98f87f6c7ffbf2ffeba4255ca07f"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.278538 4625 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" event={"ID":"7c402f2a-3e9f-4eba-a881-a59ae3626f5a","Type":"ContainerStarted","Data":"36936d08085db3b89dbf55ad2b34c46be4a2c0419fdd7012c2047ef2aa3116df"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.282760 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.295740 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt" event={"ID":"6c0a10a7-ccfe-45a2-8b74-df21b80d67df","Type":"ContainerStarted","Data":"fdf5678a252d283b0a7ce60474c4f17a9016a9dfe2016fce033f1df8ccaac571"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.300994 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf" event={"ID":"82577eb4-f869-4200-b5b4-929920b4272a","Type":"ContainerStarted","Data":"857c056b753cd8bb3e89dfd1760cb677cdac1b23ea8a67a135ab0f6d433e8e00"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.303897 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.319058 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bfk9k" event={"ID":"0e4e6fc1-bf89-455e-8409-31ba869ffdf1","Type":"ContainerStarted","Data":"0fd1671f275b315b061fb974808d1aa500684a26ad23a92b9cd33ada2302d263"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.320000 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:46 crc kubenswrapper[4625]: E1202 13:46:46.320146 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:46.820118067 +0000 UTC m=+162.782295142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.320522 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:46 crc kubenswrapper[4625]: E1202 13:46:46.321242 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:46.821224717 +0000 UTC m=+162.783401802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.323079 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" event={"ID":"c8bad892-59d1-45b5-a388-156353675860","Type":"ContainerStarted","Data":"7e15a380c4958922635a3f0ef876a4a29f0f3e0243712bcd54a07d1ffa74f9ea"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.329269 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hrfrv" event={"ID":"689505d7-9623-458a-b60a-e584c405540d","Type":"ContainerStarted","Data":"f3800289fd78618ee70f1f779350e204bfd25f8b8f814718961783ad0cf439b4"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.331179 4625 generic.go:334] "Generic (PLEG): container finished" podID="2d498f8f-b8ca-41f0-96a7-d1c170a2fa15" containerID="91d78e21973b12a46beb8be7656a4ccaaa26111e5aace35d6668dca6ab588326" exitCode=0 Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.332055 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" event={"ID":"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15","Type":"ContainerDied","Data":"91d78e21973b12a46beb8be7656a4ccaaa26111e5aace35d6668dca6ab588326"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.397892 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sjskb" event={"ID":"bfa9a143-ca0d-4f32-b9a7-b2acb327bedc","Type":"ContainerStarted","Data":"e425237d6e8054e0925235215e7e95f133d4afbdc344e1c7f585a74dd71ce705"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.435014 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:46 crc kubenswrapper[4625]: E1202 13:46:46.436580 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:46.936558084 +0000 UTC m=+162.898735159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.467593 4625 generic.go:334] "Generic (PLEG): container finished" podID="9da966a1-56e1-4805-b03d-97b9d2ca467a" containerID="c1767afc3ef11a6bbdc9f86b5291178c56023284c56594148c1a7ef084cbcdca" exitCode=0 Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.467770 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" event={"ID":"9da966a1-56e1-4805-b03d-97b9d2ca467a","Type":"ContainerDied","Data":"c1767afc3ef11a6bbdc9f86b5291178c56023284c56594148c1a7ef084cbcdca"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.477659 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" event={"ID":"47088586-bc86-4ee1-99db-31eb2eb14ffc","Type":"ContainerStarted","Data":"868c11f267ef783dca41da2e585285d52ea07b9eabee8b348738f6df9118547a"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.492847 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" event={"ID":"2ec55e1a-74d5-4c19-abde-2b8d8e9f392c","Type":"ContainerStarted","Data":"3e00d88a012902b8f35506df5a5e2fee506ccdd1da81a51339cce2b35dd70adb"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.521622 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dsfpw" event={"ID":"f086744a-9c7e-46bc-b05a-cce4599e47aa","Type":"ContainerStarted","Data":"361b008680f6d1fe1b313b7abeae4d1f6a5f4cf43b8f55c58c2c4a3ef7204e07"} Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.549130 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:46 crc kubenswrapper[4625]: E1202 13:46:46.549610 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:47.049593797 +0000 UTC m=+163.011770872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.601100 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r4vx5" event={"ID":"78ff7018-4a24-46de-b71b-6a4fb8b8b8ee","Type":"ContainerStarted","Data":"42feba25dc095904de96348b5c35e6a8801a50d1896d6fd782f725255d75f0d7"}
Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.646025 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dxltq" event={"ID":"f328b6e0-83f8-4ef2-abed-fe6dbabab077","Type":"ContainerStarted","Data":"b90b24a11dc03f3feba950b2768342435945009d695a3cdcce5680e4c7948631"}
Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.654810 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:46 crc kubenswrapper[4625]: E1202 13:46:46.655877 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:47.155843086 +0000 UTC m=+163.118020271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.683840 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4" event={"ID":"5fcdeff4-0e89-4a89-bb26-53bae5e6a0ef","Type":"ContainerStarted","Data":"d9dc58c8dc3dbbd6b7c09ac08040c7b942550e8b2af0180651bac4611a1a6149"}
Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.698282 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw" event={"ID":"748aaca7-daf1-4bd8-b397-1b3c6eedbc4a","Type":"ContainerStarted","Data":"9fcf1098edcb120b931cc5bf8ef4a7a9e79f4f15f00441aa7bbc6b6c293e6d57"}
Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.705059 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bnk2h" event={"ID":"35ca6406-d63b-41a2-9217-85afd26abacd","Type":"ContainerStarted","Data":"139015b7f82c551637140e0e14e2f3ed5489e8383b098c500e61f4e20f42becf"}
Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.712738 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.713057 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.768539 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:46 crc kubenswrapper[4625]: E1202 13:46:46.770210 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:47.270192776 +0000 UTC m=+163.232369851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.869758 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:46 crc kubenswrapper[4625]: E1202 13:46:46.871226 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:47.371210023 +0000 UTC m=+163.333387098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.933090 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt"
Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.960160 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7wbq4" podStartSLOduration=141.960142352 podStartE2EDuration="2m21.960142352s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:46.951671571 +0000 UTC m=+162.913848646" watchObservedRunningTime="2025-12-02 13:46:46.960142352 +0000 UTC m=+162.922319427"
Dec 02 13:46:46 crc kubenswrapper[4625]: I1202 13:46:46.974593 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:46 crc kubenswrapper[4625]: E1202 13:46:46.974950 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:47.474938393 +0000 UTC m=+163.437115468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.026261 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw" podStartSLOduration=141.026241499 podStartE2EDuration="2m21.026241499s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:47.010883931 +0000 UTC m=+162.973061026" watchObservedRunningTime="2025-12-02 13:46:47.026241499 +0000 UTC m=+162.988418574"
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.078069 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:47 crc kubenswrapper[4625]: E1202 13:46:47.078457 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:47.578436668 +0000 UTC m=+163.540613743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.117247 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 02 13:46:47 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld
Dec 02 13:46:47 crc kubenswrapper[4625]: [+]process-running ok
Dec 02 13:46:47 crc kubenswrapper[4625]: healthz check failed
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.117297 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.183249 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:47 crc kubenswrapper[4625]: E1202 13:46:47.196817 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:47.696798917 +0000 UTC m=+163.658975982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.207609 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-dsfpw" podStartSLOduration=142.207586349 podStartE2EDuration="2m22.207586349s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:47.180418811 +0000 UTC m=+163.142595886" watchObservedRunningTime="2025-12-02 13:46:47.207586349 +0000 UTC m=+163.169763424"
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.292160 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:47 crc kubenswrapper[4625]: E1202 13:46:47.292697 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:47.792676723 +0000 UTC m=+163.754853798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.432147 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:47 crc kubenswrapper[4625]: E1202 13:46:47.432841 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:47.932826154 +0000 UTC m=+163.895003229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.561123 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:47 crc kubenswrapper[4625]: E1202 13:46:47.561570 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:48.061553315 +0000 UTC m=+164.023730390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.562027 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.562057 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.562112 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.562157 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.664693 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:47 crc kubenswrapper[4625]: E1202 13:46:47.665142 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:48.165130371 +0000 UTC m=+164.127307446 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.687494 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wsvpt" podStartSLOduration=141.687463488 podStartE2EDuration="2m21.687463488s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:47.44001685 +0000 UTC m=+163.402193925" watchObservedRunningTime="2025-12-02 13:46:47.687463488 +0000 UTC m=+163.649640563"
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.712463 4625 patch_prober.go:28] interesting pod/console-operator-58897d9998-h2nf9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.712557 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-h2nf9" podUID="48f0e27b-d4d8-4118-98af-e6fa04663c27" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.724740 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c" event={"ID":"b067825c-8b40-4b11-bf8b-52ebf31ec4ba","Type":"ContainerStarted","Data":"d76021794f44826ac764fd9dbc717a5063189d0f91888482eedc6c8d04ad8331"}
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.742727 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" event={"ID":"c8bad892-59d1-45b5-a388-156353675860","Type":"ContainerStarted","Data":"8d030dcfc52bd3a37718d1d50f8ce82c519aaf5871fc1b47e3aae65252c619df"}
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.768907 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:47 crc kubenswrapper[4625]: E1202 13:46:47.769324 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:48.269289964 +0000 UTC m=+164.231467039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.787353 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hfdtw" event={"ID":"748aaca7-daf1-4bd8-b397-1b3c6eedbc4a","Type":"ContainerStarted","Data":"e847571852d5b0ed07cd22f5a39ba55124ccabc367804760334d6e83d1e00b42"}
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.810408 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf" event={"ID":"82577eb4-f869-4200-b5b4-929920b4272a","Type":"ContainerStarted","Data":"e1e5c029a86a3f08e3be2aa58adbcc6ac852e8e94defda6d1a8a66e30b5d1c28"}
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.906405 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:47 crc kubenswrapper[4625]: E1202 13:46:47.908215 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:48.408199941 +0000 UTC m=+164.370377016 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.948173 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" event={"ID":"2ec55e1a-74d5-4c19-abde-2b8d8e9f392c","Type":"ContainerStarted","Data":"d94db662360bdd51b496fc703a1d7d33962a17c16dfea272441705f41afdabb5"}
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.948362 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w"
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.994366 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" podStartSLOduration=141.994343933 podStartE2EDuration="2m21.994343933s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:47.984514116 +0000 UTC m=+163.946691201" watchObservedRunningTime="2025-12-02 13:46:47.994343933 +0000 UTC m=+163.956521008"
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.996727 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" podStartSLOduration=107.996718358 podStartE2EDuration="1m47.996718358s" podCreationTimestamp="2025-12-02 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:47.905265082 +0000 UTC m=+163.867442157" watchObservedRunningTime="2025-12-02 13:46:47.996718358 +0000 UTC m=+163.958895433"
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.997394 4625 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9vd9w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body=
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.997447 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" podUID="2ec55e1a-74d5-4c19-abde-2b8d8e9f392c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused"
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.998328 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" event={"ID":"4065d249-ffb1-406a-9e88-b6b97cf70f2a","Type":"ContainerStarted","Data":"1f06189811257927c555243199630b1596c6d73dcd32d73fefe67438d2d3faaf"}
Dec 02 13:46:47 crc kubenswrapper[4625]: I1202 13:46:47.999632 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5"
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.007261 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:48 crc kubenswrapper[4625]: E1202 13:46:48.007866 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:48.50784738 +0000 UTC m=+164.470024455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.053744 4625 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hm5k5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.053800 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" podUID="4065d249-ffb1-406a-9e88-b6b97cf70f2a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.064931 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-f58fc" event={"ID":"a7490ca6-a9c3-4103-98e8-951179079bf7","Type":"ContainerStarted","Data":"91e5da3ce0cc163a5d60564171b530e7fdc1bcfd25b048fc6dc4580c4a5d59f3"}
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.107486 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kfxdf" podStartSLOduration=142.107471299 podStartE2EDuration="2m22.107471299s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:48.07404707 +0000 UTC m=+164.036224145" watchObservedRunningTime="2025-12-02 13:46:48.107471299 +0000 UTC m=+164.069648374"
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.108495 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:48 crc kubenswrapper[4625]: E1202 13:46:48.109706 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:48.60968661 +0000 UTC m=+164.571863685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.124190 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sjskb" event={"ID":"bfa9a143-ca0d-4f32-b9a7-b2acb327bedc","Type":"ContainerStarted","Data":"97c0e13562c8ddb35b50f710022b535f5395d9c7e4f8cb134c1164c5898c48df"}
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.124432 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 02 13:46:48 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld
Dec 02 13:46:48 crc kubenswrapper[4625]: [+]process-running ok
Dec 02 13:46:48 crc kubenswrapper[4625]: healthz check failed
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.124677 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.207164 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" event={"ID":"7e8d817d-9152-48c4-b7b0-f9df76891753","Type":"ContainerStarted","Data":"05c11bb03619dd1992b1775c35360ca730730783cd1e421189f375d4b8103194"}
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.208124 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb"
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.209897 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:48 crc kubenswrapper[4625]: E1202 13:46:48.210150 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:48.71011884 +0000 UTC m=+164.672295915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.210208 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:48 crc kubenswrapper[4625]: E1202 13:46:48.211356 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:48.711341474 +0000 UTC m=+164.673518559 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.212395 4625 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g2crb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.212443 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" podUID="7e8d817d-9152-48c4-b7b0-f9df76891753" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.229230 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" event={"ID":"7c402f2a-3e9f-4eba-a881-a59ae3626f5a","Type":"ContainerStarted","Data":"2662546b0dd107983fe35b16b8230c0047e133406cb21dd3a5a0d12709c20921"}
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.230168 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2"
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.232489 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd" event={"ID":"b28f8782-c7d3-4034-b269-90be9cbd9eec","Type":"ContainerStarted","Data":"c7cf18870d9b54d5cf36635eef53b2e9afe3d36747dd02710387c13f42bbf20c"}
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.232516 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd" event={"ID":"b28f8782-c7d3-4034-b269-90be9cbd9eec","Type":"ContainerStarted","Data":"dabf55a3696c79fe0bb1b7365f3f1d7fc74b434aa6b2c50136bb5f1e16c95e59"}
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.242687 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hrfrv" event={"ID":"689505d7-9623-458a-b60a-e584c405540d","Type":"ContainerStarted","Data":"d2aa3657bfd1e613265de8486b98052686debb717649160ec0af98b5122e6f0c"}
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.243382 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-hrfrv"
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.244722 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" event={"ID":"47088586-bc86-4ee1-99db-31eb2eb14ffc","Type":"ContainerStarted","Data":"32585ce414d063c8f70cbb625b5b3a0c78aac7568e6ec440eba7ab48067d274d"}
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.252003 4625 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qcjf2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.252056 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" podUID="7c402f2a-3e9f-4eba-a881-a59ae3626f5a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.253039 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" event={"ID":"935b4d1d-2bd9-4594-a23b-d823402ac019","Type":"ContainerStarted","Data":"1090b8dca62374b06664d4bafa60681ddca4706377a4fb0df75392b248084119"}
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.274177 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" event={"ID":"9da966a1-56e1-4805-b03d-97b9d2ca467a","Type":"ContainerStarted","Data":"f6f21302897dcb03298ff513f3913d2945b7828089f8c6562ca1c88ecb122e5d"}
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.275025 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh"
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.277227 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq" event={"ID":"87ed22c8-2d67-46b7-91e9-292293517801","Type":"ContainerStarted","Data":"6c514cecd21b45151ea6cc279270fb2413c2b2fb0f3c4521da6d3cc443f1d9b6"}
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.314039 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:48 crc kubenswrapper[4625]: E1202 13:46:48.315651 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:48.81563214 +0000 UTC m=+164.777809215 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.319812 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" podStartSLOduration=142.319791413 podStartE2EDuration="2m22.319791413s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:48.190171368 +0000 UTC m=+164.152348433" watchObservedRunningTime="2025-12-02 13:46:48.319791413 +0000 UTC m=+164.281968488"
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.465696 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:48 crc kubenswrapper[4625]: E1202 13:46:48.466202 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:48.966180023 +0000 UTC m=+164.928357098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.471530 4625 patch_prober.go:28] interesting pod/console-operator-58897d9998-h2nf9 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.471629 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-h2nf9" podUID="48f0e27b-d4d8-4118-98af-e6fa04663c27" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.569488 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:48 crc kubenswrapper[4625]: E1202 13:46:48.570040 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:49.070015707 +0000 UTC m=+165.032192782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.576985 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" podStartSLOduration=142.576966506 podStartE2EDuration="2m22.576966506s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:48.322944429 +0000 UTC m=+164.285121524" watchObservedRunningTime="2025-12-02 13:46:48.576966506 +0000 UTC m=+164.539143581"
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.673114 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:48 crc kubenswrapper[4625]: E1202 13:46:48.673453 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:49.173441699 +0000 UTC m=+165.135618774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.815995 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:48 crc kubenswrapper[4625]: E1202 13:46:48.817862 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:49.317839486 +0000 UTC m=+165.280016561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.862549 4625 patch_prober.go:28] interesting pod/console-operator-58897d9998-h2nf9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.862603 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-h2nf9" podUID="48f0e27b-d4d8-4118-98af-e6fa04663c27" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.874008 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-f58fc" podStartSLOduration=13.873984573 podStartE2EDuration="13.873984573s" podCreationTimestamp="2025-12-02 13:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:48.579135915 +0000 UTC m=+164.541312990" watchObservedRunningTime="2025-12-02 13:46:48.873984573 +0000 UTC m=+164.836161648"
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.920529 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:48 crc kubenswrapper[4625]: E1202 13:46:48.921099 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:49.421084153 +0000 UTC m=+165.383261228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.934666 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n5hqd" podStartSLOduration=142.934644672 podStartE2EDuration="2m22.934644672s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:48.876541212 +0000 UTC m=+164.838718297" watchObservedRunningTime="2025-12-02 13:46:48.934644672 +0000 UTC m=+164.896821747"
Dec 02 13:46:48 crc kubenswrapper[4625]: I1202 13:46:48.978523 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" podStartSLOduration=143.978505945 podStartE2EDuration="2m23.978505945s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:48.936223745 +0000 UTC m=+164.898400820" watchObservedRunningTime="2025-12-02 13:46:48.978505945 +0000 UTC m=+164.940683020"
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.021391 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:49 crc kubenswrapper[4625]: E1202 13:46:49.021781 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:49.521760331 +0000 UTC m=+165.483937406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.033273 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sjskb" podStartSLOduration=143.033254823 podStartE2EDuration="2m23.033254823s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:48.984748385 +0000 UTC m=+164.946925470" watchObservedRunningTime="2025-12-02 13:46:49.033254823 +0000 UTC m=+164.995431898"
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.080123 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" podStartSLOduration=143.080105508 podStartE2EDuration="2m23.080105508s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:49.032871863 +0000 UTC m=+164.995048938" watchObservedRunningTime="2025-12-02 13:46:49.080105508 +0000 UTC m=+165.042282583"
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.081062 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-hrfrv" podStartSLOduration=14.081054033 podStartE2EDuration="14.081054033s" podCreationTimestamp="2025-12-02 13:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:49.080838077 +0000 UTC m=+165.043015152" watchObservedRunningTime="2025-12-02 13:46:49.081054033 +0000 UTC m=+165.043231108"
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.123404 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:49 crc kubenswrapper[4625]: E1202 13:46:49.123914 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:49.623900348 +0000 UTC m=+165.586077423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.146199 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 02 13:46:49 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld
Dec 02 13:46:49 crc kubenswrapper[4625]: [+]process-running ok
Dec 02 13:46:49 crc kubenswrapper[4625]: healthz check failed
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.146273 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.235557 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:49 crc kubenswrapper[4625]: E1202 13:46:49.236168 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:49.736148101 +0000 UTC m=+165.698325176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.256006 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" podStartSLOduration=143.25598951 podStartE2EDuration="2m23.25598951s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:49.250754738 +0000 UTC m=+165.212931813" watchObservedRunningTime="2025-12-02 13:46:49.25598951 +0000 UTC m=+165.218166585"
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.257101 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" podStartSLOduration=143.25709477 podStartE2EDuration="2m23.25709477s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:49.206045272 +0000 UTC m=+165.168222347" watchObservedRunningTime="2025-12-02 13:46:49.25709477 +0000 UTC m=+165.219271845"
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.271562 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.271825 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.278204 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qcgnq" podStartSLOduration=143.278185153 podStartE2EDuration="2m23.278185153s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:49.272328724 +0000 UTC m=+165.234505799" watchObservedRunningTime="2025-12-02 13:46:49.278185153 +0000 UTC m=+165.240362228"
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.337756 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:49 crc kubenswrapper[4625]: E1202 13:46:49.338342 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:49.838296998 +0000 UTC m=+165.800474103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.340926 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-9zdhl" event={"ID":"ef408b1b-6bae-454b-b9a2-3dd62ffcacf2","Type":"ContainerStarted","Data":"cf89cfc7802db83ab30c25d886e3a3a331b85e5fa1948c31c21ce048d3e61977"}
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.351436 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" event={"ID":"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15","Type":"ContainerStarted","Data":"9688efc359b8c0a9f7031838150cf75cf94645dbd16588c7dd08738b5b841cda"}
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.365724 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c" event={"ID":"b067825c-8b40-4b11-bf8b-52ebf31ec4ba","Type":"ContainerStarted","Data":"e88d1efbcad0df4ee95eea32da728dd14969287f2dc001df9474fbe5494b806d"}
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.365867 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c"
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.394616 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bfk9k" event={"ID":"0e4e6fc1-bf89-455e-8409-31ba869ffdf1","Type":"ContainerStarted","Data":"7003bed0eb1be3ec75542fa58e5874eb921cf64aae825d0a2ce727155fe056c7"}
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.418257 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bnk2h" event={"ID":"35ca6406-d63b-41a2-9217-85afd26abacd","Type":"ContainerStarted","Data":"f9cfffdafc80187c9741b99075736da8ee125425212f1252d6be6e469ee1d5bc"}
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.418301 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bnk2h" event={"ID":"35ca6406-d63b-41a2-9217-85afd26abacd","Type":"ContainerStarted","Data":"e05a423a3e93b1ea577a789b621d2c9ac3240426429e76bc2c87d66ee5d97d41"}
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.438884 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:49 crc kubenswrapper[4625]: E1202 13:46:49.440921 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:49.940893758 +0000 UTC m=+165.903070983 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.452683 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dxltq" event={"ID":"f328b6e0-83f8-4ef2-abed-fe6dbabab077","Type":"ContainerStarted","Data":"6eab72a95191ae8c43a61d5a9c5d26f82750a305f032063988b34c14da440127"}
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.541403 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.541453 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs\") pod \"network-metrics-daemon-x94k8\" (UID: \"23fa40dc-ba01-4997-bb3f-c9774637dc22\") " pod="openshift-multus/network-metrics-daemon-x94k8"
Dec 02 13:46:49 crc kubenswrapper[4625]: E1202 13:46:49.542728 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:50.042715177 +0000 UTC m=+166.004892252 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.552808 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23fa40dc-ba01-4997-bb3f-c9774637dc22-metrics-certs\") pod \"network-metrics-daemon-x94k8\" (UID: \"23fa40dc-ba01-4997-bb3f-c9774637dc22\") " pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.572280 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" event={"ID":"47088586-bc86-4ee1-99db-31eb2eb14ffc","Type":"ContainerStarted","Data":"5cc642c8c3a609149c7cb20a6ea159e50ab5a96a3dd5e1eb43f8cd4a97a5b8ab"} Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.613639 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-66bnq" event={"ID":"d9ab4834-e296-4113-ad72-c1e6c86b3ee6","Type":"ContainerStarted","Data":"0ae8768883189a553302437ac432c218afa810bfcfa31d14b7134742375bddc5"} Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.625734 4625 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hmnvh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.626004 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" podUID="9da966a1-56e1-4805-b03d-97b9d2ca467a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.626188 4625 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hmnvh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.626275 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" podUID="9da966a1-56e1-4805-b03d-97b9d2ca467a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.640553 4625 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hmnvh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.640613 4625 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" podUID="9da966a1-56e1-4805-b03d-97b9d2ca467a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.640568 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hrfrv" event={"ID":"689505d7-9623-458a-b60a-e584c405540d","Type":"ContainerStarted","Data":"1b3f738eb1f3d6507e9cbf926aa7e1b2a542bad270341b93c767ea9b0df055ff"} Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.641875 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.643423 4625 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9vd9w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.643470 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" podUID="2ec55e1a-74d5-4c19-abde-2b8d8e9f392c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.643560 4625 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g2crb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.643581 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" podUID="7e8d817d-9152-48c4-b7b0-f9df76891753" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Dec 02 13:46:49 crc kubenswrapper[4625]: E1202 13:46:49.643995 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:50.14397946 +0000 UTC m=+166.106156535 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.644727 4625 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hm5k5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.644864 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" podUID="4065d249-ffb1-406a-9e88-b6b97cf70f2a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.651049 4625 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qcjf2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.651111 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" podUID="7c402f2a-3e9f-4eba-a881-a59ae3626f5a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.651374 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-h2nf9" Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.773905 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.775762 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-x94k8" Dec 02 13:46:49 crc kubenswrapper[4625]: E1202 13:46:49.776415 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:50.276398073 +0000 UTC m=+166.238575148 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.814694 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c" podStartSLOduration=143.814673973 podStartE2EDuration="2m23.814673973s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:49.60091818 +0000 UTC m=+165.563095255" watchObservedRunningTime="2025-12-02 13:46:49.814673973 +0000 UTC m=+165.776851048" Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.821776 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bnk2h" podStartSLOduration=144.821759536 podStartE2EDuration="2m24.821759536s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:49.812267039 +0000 UTC m=+165.774444114" watchObservedRunningTime="2025-12-02 13:46:49.821759536 +0000 UTC m=+165.783936781" Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.880449 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:49 crc kubenswrapper[4625]: E1202 13:46:49.881116 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:50.381091669 +0000 UTC m=+166.343268744 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.926549 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r96rb" podStartSLOduration=143.926513033 podStartE2EDuration="2m23.926513033s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:49.925189217 +0000 UTC m=+165.887366282" watchObservedRunningTime="2025-12-02 13:46:49.926513033 +0000 UTC m=+165.888690108" Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.954876 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.955181 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.960563 4625 patch_prober.go:28] interesting pod/console-f9d7485db-pr728 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.960810 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-pr728" podUID="15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" containerName="console" probeResult="failure" output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" Dec 02 13:46:49 crc kubenswrapper[4625]: I1202 13:46:49.982594 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:49 crc kubenswrapper[4625]: E1202 13:46:49.983416 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:50.483382999 +0000 UTC m=+166.445560074 (durationBeforeRetry 500ms). 
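---- annotation (editor) ----
Every MountDevice/TearDown failure above has the same root cause: the kubelet's in-memory CSI plugin registry has no entry named kubevirt.io.hostpath-provisioner yet, so attacher.MountDevice and Unmounter.TearDownAt cannot construct a CSI client. The csi-hostpathplugin-dxltq containers are only just coming up (see the PLEG ContainerStarted events), and the driver's node registration over the kubelet plugin-registration socket has not happened yet. A minimal client-go sketch for checking which drivers a node has registered follows; the in-cluster config and the node name "crc" are assumptions for illustration, not taken from this log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes this runs in a pod with RBAC that allows reading
	// storage.k8s.io/v1 CSINode objects.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The CSINode object mirrors the kubelet's registered-driver list; until
	// kubevirt.io.hostpath-provisioner appears here, MountDevice keeps failing.
	node, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range node.Spec.Drivers {
		fmt.Printf("registered CSI driver: %s (nodeID %s)\n", d.Name, d.NodeID)
	}
}

An empty Drivers list while the plugin pod is still starting would match exactly the errors recorded in this window.
---- end annotation ----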
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.031283 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-66bnq" podStartSLOduration=144.031266611 podStartE2EDuration="2m24.031266611s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:50.029423402 +0000 UTC m=+165.991600467" watchObservedRunningTime="2025-12-02 13:46:50.031266611 +0000 UTC m=+165.993443686"
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.083814 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:50 crc kubenswrapper[4625]: E1202 13:46:50.084213 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:50.5841727 +0000 UTC m=+166.546349775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.084548 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:50 crc kubenswrapper[4625]: E1202 13:46:50.087144 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:50.58711717 +0000 UTC m=+166.549294425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.101227 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-pqzl9"
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.105783 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 02 13:46:50 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld
Dec 02 13:46:50 crc kubenswrapper[4625]: [+]process-running ok
Dec 02 13:46:50 crc kubenswrapper[4625]: healthz check failed
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.105843 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.211472 4625 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9vd9w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body=
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.211737 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" podUID="2ec55e1a-74d5-4c19-abde-2b8d8e9f392c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused"
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.211514 4625 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g2crb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.211879 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" podUID="7e8d817d-9152-48c4-b7b0-f9df76891753" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.211580 4625 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g2crb container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.211964 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" podUID="7e8d817d-9152-48c4-b7b0-f9df76891753" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.211822 4625 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9vd9w container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body=
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.212019 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" podUID="2ec55e1a-74d5-4c19-abde-2b8d8e9f392c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused"
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.212500 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:50 crc kubenswrapper[4625]: E1202 13:46:50.212788 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:50.712741287 +0000 UTC m=+166.674918362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.213025 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:50 crc kubenswrapper[4625]: E1202 13:46:50.213537 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:50.713511727 +0000 UTC m=+166.675688802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.325045 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.325453 4625 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hm5k5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.325559 4625 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hm5k5 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.326668 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" podUID="4065d249-ffb1-406a-9e88-b6b97cf70f2a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.326932 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" podUID="4065d249-ffb1-406a-9e88-b6b97cf70f2a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.327400 4625 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qcjf2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.327483 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" podUID="7c402f2a-3e9f-4eba-a881-a59ae3626f5a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Dec 02 13:46:50 crc kubenswrapper[4625]: E1202 13:46:50.327683 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:50.827660001 +0000 UTC m=+166.789837076 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.330816 4625 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qcjf2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.330862 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" podUID="7c402f2a-3e9f-4eba-a881-a59ae3626f5a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.429168 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:50 crc kubenswrapper[4625]: E1202 13:46:50.430319 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:50.930281772 +0000 UTC m=+166.892458857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.532031 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:50 crc kubenswrapper[4625]: E1202 13:46:50.533988 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:51.033708834 +0000 UTC m=+166.995885909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.634386 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:46:50 crc kubenswrapper[4625]: E1202 13:46:50.635364 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:51.135345837 +0000 UTC m=+167.097522912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.715816 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" event={"ID":"2d498f8f-b8ca-41f0-96a7-d1c170a2fa15","Type":"ContainerStarted","Data":"973b9ba5f35bd4096f8878920c810e18d278c5b7bfb04fb5ee122bc93becbbcd"}
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.736252 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 02 13:46:50 crc kubenswrapper[4625]: E1202 13:46:50.736542 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:51.236526469 +0000 UTC m=+167.198703544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.740283 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bfk9k" event={"ID":"0e4e6fc1-bf89-455e-8409-31ba869ffdf1","Type":"ContainerStarted","Data":"21baefaa830a0d6bf683e4942940bc62773555b8a37cf45753d3ae39dd7604f8"}
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.764985 4625 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qcjf2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.765037 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" podUID="7c402f2a-3e9f-4eba-a881-a59ae3626f5a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.765108 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dxltq" event={"ID":"f328b6e0-83f8-4ef2-abed-fe6dbabab077","Type":"ContainerStarted","Data":"c24014ed7e352ec6efc29280ca21009c0b6abe52430ce333d6be30a056181825"}
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.766664 4625 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hmnvh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.766695 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" podUID="9da966a1-56e1-4805-b03d-97b9d2ca467a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.766743 4625 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hm5k5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.766811 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" podUID="4065d249-ffb1-406a-9e88-b6b97cf70f2a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.785642 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb"
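---- annotation (editor) ----
The nestedpendingoperations.go:348 entries show the volume manager's retry gate: after each failure the operation is locked out until the printed deadline, here consistently 500ms ahead ("durationBeforeRetry 500ms"). A simplified Go sketch of that gating pattern follows. It is illustrative only, not the kubelet's actual implementation; the doubling growth factor is an assumption (the real backoff in nestedpendingoperations grows and is capped), and in this particular log the printed delay stays at 500ms throughout.

package main

import (
	"fmt"
	"time"
)

// backoff mimics the per-operation retry gate seen in the log: after a
// failure, the operation may not run again before notBefore.
type backoff struct {
	duration  time.Duration
	notBefore time.Time
}

func (b *backoff) fail(now time.Time) {
	if b.duration == 0 {
		b.duration = 500 * time.Millisecond // matches "durationBeforeRetry 500ms"
	} else {
		b.duration *= 2 // assumed growth factor; the real code caps this
	}
	b.notBefore = now.Add(b.duration)
}

func (b *backoff) allowed(now time.Time) bool { return !now.Before(b.notBefore) }

func main() {
	var b backoff
	now := time.Now()
	for i := 1; i <= 4; i++ {
		b.fail(now)
		fmt.Printf("attempt %d failed; no retries permitted until %s\n",
			i, b.notBefore.Format(time.RFC3339Nano))
		now = b.notBefore // pretend we retry the moment the window opens
		fmt.Println("retry allowed now:", b.allowed(now))
	}
}

The reconciler keeps re-listing the pending mount/unmount work on every sync pass, which is why the "operationExecutor.*Volume started" lines repeat even while the gate holds the actual operation back.
---- end annotation ----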
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.812059 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" podStartSLOduration=145.812028602 podStartE2EDuration="2m25.812028602s" podCreationTimestamp="2025-12-02 13:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:50.810018187 +0000 UTC m=+166.772195272" watchObservedRunningTime="2025-12-02 13:46:50.812028602 +0000 UTC m=+166.774205677" Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.841068 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:50 crc kubenswrapper[4625]: E1202 13:46:50.844989 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:51.344964708 +0000 UTC m=+167.307141983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:50 crc kubenswrapper[4625]: I1202 13:46:50.942755 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:50 crc kubenswrapper[4625]: E1202 13:46:50.943140 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:51.443121407 +0000 UTC m=+167.405298482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:51 crc kubenswrapper[4625]: I1202 13:46:51.044463 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:51 crc kubenswrapper[4625]: E1202 13:46:51.045992 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:51.545973314 +0000 UTC m=+167.508150389 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:51 crc kubenswrapper[4625]: I1202 13:46:51.105378 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:46:51 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:46:51 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:46:51 crc kubenswrapper[4625]: healthz check failed Dec 02 13:46:51 crc kubenswrapper[4625]: I1202 13:46:51.105504 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:46:51 crc kubenswrapper[4625]: I1202 13:46:51.145790 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:51 crc kubenswrapper[4625]: E1202 13:46:51.146685 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:51.646660892 +0000 UTC m=+167.608837987 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:51 crc kubenswrapper[4625]: I1202 13:46:51.146977 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:51 crc kubenswrapper[4625]: E1202 13:46:51.147404 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:51.647394451 +0000 UTC m=+167.609571526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:51 crc kubenswrapper[4625]: I1202 13:46:51.248790 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:51 crc kubenswrapper[4625]: E1202 13:46:51.249006 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:51.748965744 +0000 UTC m=+167.711142819 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:51 crc kubenswrapper[4625]: I1202 13:46:51.249481 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:51 crc kubenswrapper[4625]: E1202 13:46:51.249997 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:51.74997666 +0000 UTC m=+167.712153735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:51 crc kubenswrapper[4625]: I1202 13:46:51.437410 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:51 crc kubenswrapper[4625]: E1202 13:46:51.452127 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:51.952085777 +0000 UTC m=+167.914262852 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.109583 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:46:52 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:46:52 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:46:52 crc kubenswrapper[4625]: healthz check failed Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.110382 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.152577 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:52 crc kubenswrapper[4625]: E1202 13:46:52.153156 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:52.65314528 +0000 UTC m=+168.615322355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.256197 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:52 crc kubenswrapper[4625]: E1202 13:46:52.256931 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:52.756912372 +0000 UTC m=+168.719089447 (durationBeforeRetry 500ms). 
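
Interleaved with the volume retries, the router's startup probe keeps failing. The router health endpoint aggregates sub-checks, and the start-of-body dump shows which ones: [-]backend-http and [-]has-synced are still failing while [+]process-running is ok, so the endpoint returns 500 until the initial route sync completes. The probe fires about once per second in this log (13:46:52, :53, :54, :55). For reference, a startup probe of that shape built with the k8s.io/api Go types; the path, port, and failure threshold here are illustrative assumptions, since the journal only records the resulting failures:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	probe := corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{ // field is named ProbeHandler in recent k8s.io/api
    			HTTPGet: &corev1.HTTPGetAction{
    				Path: "/healthz/ready",     // assumed; not shown in the journal
    				Port: intstr.FromInt(1936), // assumed health port
    			},
    		},
    		PeriodSeconds:    1,   // matches the once-per-second failures above
    		FailureThreshold: 120, // assumed budget before the container is restarted
    	}
    	fmt.Printf("%+v\n", probe)
    }
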
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.478035 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:52 crc kubenswrapper[4625]: E1202 13:46:52.478386 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:52.978377674 +0000 UTC m=+168.940554749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.488892 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.488954 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.572532 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.573621 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.580726 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:52 crc kubenswrapper[4625]: E1202 13:46:52.580831 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:53.080813149 +0000 UTC m=+169.042990224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.581285 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:52 crc kubenswrapper[4625]: E1202 13:46:52.582076 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:53.082067133 +0000 UTC m=+169.044244208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.629900 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.669459 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-bfk9k" podStartSLOduration=146.669437729 podStartE2EDuration="2m26.669437729s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:51.058981727 +0000 UTC m=+167.021158802" watchObservedRunningTime="2025-12-02 13:46:52.669437729 +0000 UTC m=+168.631614804" Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.669982 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-x94k8"] Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.683827 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:52 crc kubenswrapper[4625]: E1202 13:46:52.685516 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:53.185491126 +0000 UTC m=+169.147668201 (durationBeforeRetry 500ms). 
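
The pod_startup_latency_tracker entry is SLO bookkeeping rather than an error: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and the zero-valued firstStartedPulling/lastFinishedPulling mean no image pull contributed (the image was already present). For multus-admission-controller-857f4d67dd-bfk9k that works out to 2m26.669s. The same arithmetic, with the values copied from the entry above:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05 -0700 MST"
    	created, err := time.Parse(layout, "2025-12-02 13:44:26 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	running, err := time.Parse(layout, "2025-12-02 13:46:52.669437729 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	// Prints 2m26.669437729s, the podStartE2EDuration in the journal.
    	fmt.Println(running.Sub(created))
    }
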
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.902274 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:52 crc kubenswrapper[4625]: E1202 13:46:52.902909 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:53.402896457 +0000 UTC m=+169.365073532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.911635 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-x94k8" event={"ID":"23fa40dc-ba01-4997-bb3f-c9774637dc22","Type":"ContainerStarted","Data":"1f8b04e619947d8e757874fe3e54bfc67be0ba03e8c61e32eb07b236539f791d"} Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.915041 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dxltq" event={"ID":"f328b6e0-83f8-4ef2-abed-fe6dbabab077","Type":"ContainerStarted","Data":"3c0ec3ba4a9c337533375acafa4437e1016ef9a3ed502b1f076f79d50a7ae2b9"} Dec 02 13:46:52 crc kubenswrapper[4625]: I1202 13:46:52.933808 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rqhcv" Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.011453 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:53 crc kubenswrapper[4625]: E1202 13:46:53.012913 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:53.512890699 +0000 UTC m=+169.475067774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.064373 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.065674 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.085803 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.086635 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.113480 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:53 crc kubenswrapper[4625]: E1202 13:46:53.114094 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:53.61408081 +0000 UTC m=+169.576257885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.116281 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:46:53 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:46:53 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:46:53 crc kubenswrapper[4625]: healthz check failed Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.116651 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.152859 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.214718 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.215169 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86b3568a-5997-46ee-99b9-89fb517328cc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"86b3568a-5997-46ee-99b9-89fb517328cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.215209 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86b3568a-5997-46ee-99b9-89fb517328cc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"86b3568a-5997-46ee-99b9-89fb517328cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:46:53 crc kubenswrapper[4625]: E1202 13:46:53.215410 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:53.715376674 +0000 UTC m=+169.677553749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.316202 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86b3568a-5997-46ee-99b9-89fb517328cc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"86b3568a-5997-46ee-99b9-89fb517328cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.316348 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.316385 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86b3568a-5997-46ee-99b9-89fb517328cc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"86b3568a-5997-46ee-99b9-89fb517328cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.316788 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86b3568a-5997-46ee-99b9-89fb517328cc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"86b3568a-5997-46ee-99b9-89fb517328cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:46:53 crc kubenswrapper[4625]: E1202 13:46:53.317073 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:53.81706024 +0000 UTC m=+169.779237315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.417262 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:53 crc kubenswrapper[4625]: E1202 13:46:53.417636 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:53.917620834 +0000 UTC m=+169.879797909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.434190 4625 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.472153 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86b3568a-5997-46ee-99b9-89fb517328cc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"86b3568a-5997-46ee-99b9-89fb517328cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.519323 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:53 crc kubenswrapper[4625]: E1202 13:46:53.520150 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:54.020134721 +0000 UTC m=+169.982311796 (durationBeforeRetry 500ms). 
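
The plugin_watcher entry at 13:46:53.434 is the turning point of this section: the kubelet's plugin watcher (a filesystem watch on /var/lib/kubelet/plugins_registry/) has noticed the registration socket dropped there by the hostpath-provisioner pod whose containers started at 13:46:52.915 above, and the next entries show OperationExecutor.RegisterPlugin picking it up. Registration itself is a small gRPC handshake served on that socket, normally by a node-driver-registrar sidecar: the kubelet calls GetInfo, then acknowledges with NotifyRegistrationStatus. A minimal sketch of the serving side, assuming the k8s.io/kubelet pluginregistration/v1 bindings (the name, endpoint, and version strings are the ones the kubelet echoes back at 13:46:54.245 below):

    package main

    import (
    	"context"
    	"net"

    	"google.golang.org/grpc"
    	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
    )

    type registrar struct{}

    // GetInfo tells the kubelet what is being registered and where the
    // real CSI socket lives.
    func (registrar) GetInfo(ctx context.Context, _ *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
    	return &registerapi.PluginInfo{
    		Type:              registerapi.CSIPlugin,
    		Name:              "kubevirt.io.hostpath-provisioner",
    		Endpoint:          "/var/lib/kubelet/plugins/csi-hostpath/csi.sock",
    		SupportedVersions: []string{"1.0.0"},
    	}, nil
    }

    // NotifyRegistrationStatus is the kubelet's ack; a failure here would
    // surface in the journal instead of the "Register new plugin" line.
    func (registrar) NotifyRegistrationStatus(ctx context.Context, _ *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
    	return &registerapi.RegistrationStatusResponse{}, nil
    }

    func main() {
    	sock := "/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
    	l, err := net.Listen("unix", sock)
    	if err != nil {
    		panic(err)
    	}
    	s := grpc.NewServer()
    	registerapi.RegisterRegistrationServer(s, registrar{})
    	_ = s.Serve(l)
    }
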
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.559270 4625 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-02T13:46:53.434449032Z","Handler":null,"Name":""} Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.627088 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:53 crc kubenswrapper[4625]: E1202 13:46:53.627890 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:54.127859581 +0000 UTC m=+170.090036656 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.628032 4625 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hmnvh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.628162 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" podUID="9da966a1-56e1-4805-b03d-97b9d2ca467a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.629933 4625 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hmnvh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.630060 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" podUID="9da966a1-56e1-4805-b03d-97b9d2ca467a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.711744 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.729185 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:53 crc kubenswrapper[4625]: E1202 13:46:53.729766 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:54.229752032 +0000 UTC m=+170.191929107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.830339 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:53 crc kubenswrapper[4625]: E1202 13:46:53.830878 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:54.330858911 +0000 UTC m=+170.293035986 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:53 crc kubenswrapper[4625]: I1202 13:46:53.932322 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:53 crc kubenswrapper[4625]: E1202 13:46:53.934068 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:54.434052276 +0000 UTC m=+170.396229351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.034178 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:54 crc kubenswrapper[4625]: E1202 13:46:54.034655 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-02 13:46:54.534636011 +0000 UTC m=+170.496813086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.037350 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-x94k8" event={"ID":"23fa40dc-ba01-4997-bb3f-c9774637dc22","Type":"ContainerStarted","Data":"ee149b065745d91b59432908b6fb1267328e982c6b58dbcda82f11e1bc33d034"} Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.107432 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dxltq" event={"ID":"f328b6e0-83f8-4ef2-abed-fe6dbabab077","Type":"ContainerStarted","Data":"759b0ac1e219e08536f9852ffff0aa3822b306f969caf9c05a5ee8d294afac02"} Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.108581 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:46:54 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:46:54 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:46:54 crc kubenswrapper[4625]: healthz check failed Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.108631 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.135628 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:54 crc kubenswrapper[4625]: E1202 13:46:54.138349 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-02 13:46:54.638333641 +0000 UTC m=+170.600510716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sc4p7" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.144255 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.145029 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.152992 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.153410 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.161505 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-dxltq" podStartSLOduration=19.161477081 podStartE2EDuration="19.161477081s" podCreationTimestamp="2025-12-02 13:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:54.153553145 +0000 UTC m=+170.115730220" watchObservedRunningTime="2025-12-02 13:46:54.161477081 +0000 UTC m=+170.123654156" Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.167432 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.245419 4625 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.245462 4625 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.317269 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.317598 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebeb796e-22f3-4ec9-ac27-f966a4078864-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ebeb796e-22f3-4ec9-ac27-f966a4078864\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.317652 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebeb796e-22f3-4ec9-ac27-f966a4078864-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ebeb796e-22f3-4ec9-ac27-f966a4078864\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.405975 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.418725 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebeb796e-22f3-4ec9-ac27-f966a4078864-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ebeb796e-22f3-4ec9-ac27-f966a4078864\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.418804 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.418835 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebeb796e-22f3-4ec9-ac27-f966a4078864-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ebeb796e-22f3-4ec9-ac27-f966a4078864\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.419131 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebeb796e-22f3-4ec9-ac27-f966a4078864-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ebeb796e-22f3-4ec9-ac27-f966a4078864\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.523912 4625 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.523967 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.528917 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebeb796e-22f3-4ec9-ac27-f966a4078864-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ebeb796e-22f3-4ec9-ac27-f966a4078864\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
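
With the handshake complete, the kubelet validates the advertised spec version (csi_plugin.go:100, versions: 1.0.0) and adds kubevirt.io.hostpath-provisioner to its registered-driver map (csi_plugin.go:113), and both stuck operations drain within a second: the TearDown for pod 8f668bae-612b-4b75-9490-919e737c6a3b succeeds at 13:46:54.405, and on the mount side the attacher skips the staging step entirely because this driver does not advertise the STAGE_UNSTAGE_VOLUME node capability, so MountDevice is a no-op and the kubelet proceeds straight to NodePublishVolume (the MountVolume.SetUp that succeeds at 13:46:55.736 below). A sketch of the capability query behind that decision, using the container-storage-interface Go bindings against the csi.sock path from the registration entry (illustrative, not part of the log):

    package main

    import (
    	"context"
    	"fmt"

    	csi "github.com/container-storage-interface/spec/lib/go/csi"
    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	resp, err := csi.NewNodeClient(conn).NodeGetCapabilities(context.TODO(),
    		&csi.NodeGetCapabilitiesRequest{})
    	if err != nil {
    		panic(err)
    	}
    	staged := false
    	for _, c := range resp.GetCapabilities() {
    		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
    			staged = true
    		}
    	}
    	// false here is why the journal logs "STAGE_UNSTAGE_VOLUME capability
    	// not set. Skipping MountDevice..." and jumps to NodePublishVolume.
    	fmt.Println("STAGE_UNSTAGE_VOLUME:", staged)
    }
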
Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.772649 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:46:54 crc kubenswrapper[4625]: I1202 13:46:54.880199 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.115726 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:46:55 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:46:55 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:46:55 crc kubenswrapper[4625]: healthz check failed Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.115789 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.252732 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r84nn"] Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.253903 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.354838 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.370848 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r84nn"] Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.465282 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/039b4452-411a-43c5-9823-860c079e5de3-catalog-content\") pod \"community-operators-r84nn\" (UID: \"039b4452-411a-43c5-9823-860c079e5de3\") " pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.465707 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.466125 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/039b4452-411a-43c5-9823-860c079e5de3-utilities\") pod \"community-operators-r84nn\" (UID: \"039b4452-411a-43c5-9823-860c079e5de3\") " pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.466207 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npdlf\" (UniqueName: \"kubernetes.io/projected/039b4452-411a-43c5-9823-860c079e5de3-kube-api-access-npdlf\") pod \"community-operators-r84nn\" (UID: \"039b4452-411a-43c5-9823-860c079e5de3\") " pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.499187 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ds7zw"]
Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.577006 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/039b4452-411a-43c5-9823-860c079e5de3-catalog-content\") pod \"community-operators-r84nn\" (UID: \"039b4452-411a-43c5-9823-860c079e5de3\") " pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.577103 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/039b4452-411a-43c5-9823-860c079e5de3-utilities\") pod \"community-operators-r84nn\" (UID: \"039b4452-411a-43c5-9823-860c079e5de3\") " pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.577162 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npdlf\" (UniqueName: \"kubernetes.io/projected/039b4452-411a-43c5-9823-860c079e5de3-kube-api-access-npdlf\") pod \"community-operators-r84nn\" (UID: \"039b4452-411a-43c5-9823-860c079e5de3\") " pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.577903 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/039b4452-411a-43c5-9823-860c079e5de3-catalog-content\") pod \"community-operators-r84nn\" (UID: \"039b4452-411a-43c5-9823-860c079e5de3\") " pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.578027 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/039b4452-411a-43c5-9823-860c079e5de3-utilities\") pod \"community-operators-r84nn\" (UID: \"039b4452-411a-43c5-9823-860c079e5de3\") " pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.580183 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.616735 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.693846 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-utilities\") pod \"certified-operators-ds7zw\" (UID: \"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7\") " pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.694028 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-catalog-content\") pod \"certified-operators-ds7zw\" (UID: \"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7\") " pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.694096 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99rgn\" (UniqueName: \"kubernetes.io/projected/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-kube-api-access-99rgn\") pod \"certified-operators-ds7zw\" (UID: \"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7\") " pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.716333 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npdlf\" (UniqueName: \"kubernetes.io/projected/039b4452-411a-43c5-9823-860c079e5de3-kube-api-access-npdlf\") pod \"community-operators-r84nn\" (UID: \"039b4452-411a-43c5-9823-860c079e5de3\") " pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.719605 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.736901 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sc4p7\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") " pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.737032 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hmnvh" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.770416 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5pmg6"] Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.779129 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.802030 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36ff365b-030a-4ee4-9819-c5c41464213d-catalog-content\") pod \"community-operators-5pmg6\" (UID: \"36ff365b-030a-4ee4-9819-c5c41464213d\") " pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.802091 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-utilities\") pod \"certified-operators-ds7zw\" (UID: \"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7\") " pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.802134 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmlvp\" (UniqueName: \"kubernetes.io/projected/36ff365b-030a-4ee4-9819-c5c41464213d-kube-api-access-hmlvp\") pod \"community-operators-5pmg6\" (UID: \"36ff365b-030a-4ee4-9819-c5c41464213d\") " pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.802165 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36ff365b-030a-4ee4-9819-c5c41464213d-utilities\") pod \"community-operators-5pmg6\" (UID: \"36ff365b-030a-4ee4-9819-c5c41464213d\") " pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.802189 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-catalog-content\") pod \"certified-operators-ds7zw\" (UID: \"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7\") " pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.802210 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99rgn\" (UniqueName: \"kubernetes.io/projected/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-kube-api-access-99rgn\") pod \"certified-operators-ds7zw\" (UID: \"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7\") " pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.804141 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-utilities\") pod \"certified-operators-ds7zw\" (UID: \"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7\") " pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.804702 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-catalog-content\") pod \"certified-operators-ds7zw\" (UID: \"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7\") " pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.857873 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ws49j"] Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.859208 4625 util.go:30] "No sandbox for pod
can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.878416 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99rgn\" (UniqueName: \"kubernetes.io/projected/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-kube-api-access-99rgn\") pod \"certified-operators-ds7zw\" (UID: \"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7\") " pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.904361 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36ff365b-030a-4ee4-9819-c5c41464213d-catalog-content\") pod \"community-operators-5pmg6\" (UID: \"36ff365b-030a-4ee4-9819-c5c41464213d\") " pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.904418 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmlvp\" (UniqueName: \"kubernetes.io/projected/36ff365b-030a-4ee4-9819-c5c41464213d-kube-api-access-hmlvp\") pod \"community-operators-5pmg6\" (UID: \"36ff365b-030a-4ee4-9819-c5c41464213d\") " pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.904445 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e24375bb-53a2-4ee7-992e-4d57c2293536-utilities\") pod \"certified-operators-ws49j\" (UID: \"e24375bb-53a2-4ee7-992e-4d57c2293536\") " pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.904463 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36ff365b-030a-4ee4-9819-c5c41464213d-utilities\") pod \"community-operators-5pmg6\" (UID: \"36ff365b-030a-4ee4-9819-c5c41464213d\") " pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.904481 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e24375bb-53a2-4ee7-992e-4d57c2293536-catalog-content\") pod \"certified-operators-ws49j\" (UID: \"e24375bb-53a2-4ee7-992e-4d57c2293536\") " pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.904525 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4tc2\" (UniqueName: \"kubernetes.io/projected/e24375bb-53a2-4ee7-992e-4d57c2293536-kube-api-access-r4tc2\") pod \"certified-operators-ws49j\" (UID: \"e24375bb-53a2-4ee7-992e-4d57c2293536\") " pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.904989 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36ff365b-030a-4ee4-9819-c5c41464213d-catalog-content\") pod \"community-operators-5pmg6\" (UID: \"36ff365b-030a-4ee4-9819-c5c41464213d\") " pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.905537 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/36ff365b-030a-4ee4-9819-c5c41464213d-utilities\") pod \"community-operators-5pmg6\" (UID: \"36ff365b-030a-4ee4-9819-c5c41464213d\") " pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.909651 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.917598 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5pmg6"] Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.934713 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ds7zw"] Dec 02 13:46:55 crc kubenswrapper[4625]: I1202 13:46:55.941201 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:46:56 crc kubenswrapper[4625]: I1202 13:46:56.027080 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e24375bb-53a2-4ee7-992e-4d57c2293536-utilities\") pod \"certified-operators-ws49j\" (UID: \"e24375bb-53a2-4ee7-992e-4d57c2293536\") " pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:46:56 crc kubenswrapper[4625]: I1202 13:46:56.027119 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e24375bb-53a2-4ee7-992e-4d57c2293536-catalog-content\") pod \"certified-operators-ws49j\" (UID: \"e24375bb-53a2-4ee7-992e-4d57c2293536\") " pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:46:56 crc kubenswrapper[4625]: I1202 13:46:56.027144 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4tc2\" (UniqueName: \"kubernetes.io/projected/e24375bb-53a2-4ee7-992e-4d57c2293536-kube-api-access-r4tc2\") pod \"certified-operators-ws49j\" (UID: \"e24375bb-53a2-4ee7-992e-4d57c2293536\") " pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:46:56 crc kubenswrapper[4625]: I1202 13:46:56.055569 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e24375bb-53a2-4ee7-992e-4d57c2293536-utilities\") pod \"certified-operators-ws49j\" (UID: \"e24375bb-53a2-4ee7-992e-4d57c2293536\") " pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:46:56 crc kubenswrapper[4625]: I1202 13:46:56.078585 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ws49j"] Dec 02 13:46:56 crc kubenswrapper[4625]: I1202 13:46:56.080081 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e24375bb-53a2-4ee7-992e-4d57c2293536-catalog-content\") pod \"certified-operators-ws49j\" (UID: \"e24375bb-53a2-4ee7-992e-4d57c2293536\") " pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:46:56 crc kubenswrapper[4625]: I1202 13:46:56.081324 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmlvp\" (UniqueName: \"kubernetes.io/projected/36ff365b-030a-4ee4-9819-c5c41464213d-kube-api-access-hmlvp\") pod \"community-operators-5pmg6\" (UID: \"36ff365b-030a-4ee4-9819-c5c41464213d\") " pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:46:56 crc kubenswrapper[4625]: I1202 13:46:56.120544 4625 
patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:46:56 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:46:56 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:46:56 crc kubenswrapper[4625]: healthz check failed Dec 02 13:46:56 crc kubenswrapper[4625]: I1202 13:46:56.120640 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:46:56 crc kubenswrapper[4625]: I1202 13:46:56.175557 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4tc2\" (UniqueName: \"kubernetes.io/projected/e24375bb-53a2-4ee7-992e-4d57c2293536-kube-api-access-r4tc2\") pod \"certified-operators-ws49j\" (UID: \"e24375bb-53a2-4ee7-992e-4d57c2293536\") " pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:46:56 crc kubenswrapper[4625]: I1202 13:46:56.184096 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:46:56 crc kubenswrapper[4625]: I1202 13:46:56.196991 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:46:56 crc kubenswrapper[4625]: I1202 13:46:56.484433 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"86b3568a-5997-46ee-99b9-89fb517328cc","Type":"ContainerStarted","Data":"c9ad457d2a71cbafcb908b7702d04730f6a9d8437b342b7edbf6f8a2b308966d"} Dec 02 13:46:56 crc kubenswrapper[4625]: I1202 13:46:56.590912 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-x94k8" event={"ID":"23fa40dc-ba01-4997-bb3f-c9774637dc22","Type":"ContainerStarted","Data":"e0cf9510d7ac5e0f2722f87a85810e4e3028b47e5402180ef7245fbdd1dea287"} Dec 02 13:46:56 crc kubenswrapper[4625]: I1202 13:46:56.732345 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-x94k8" podStartSLOduration=150.732289507 podStartE2EDuration="2m30.732289507s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:56.677421545 +0000 UTC m=+172.639598650" watchObservedRunningTime="2025-12-02 13:46:56.732289507 +0000 UTC m=+172.694466582" Dec 02 13:46:56 crc kubenswrapper[4625]: I1202 13:46:56.743037 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 02 13:46:56 crc kubenswrapper[4625]: W1202 13:46:56.888739 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podebeb796e_22f3_4ec9_ac27_f966a4078864.slice/crio-5b0af563996d130d28514d056a630be3d38fabfd87ab3206a0f1e089db26de1a WatchSource:0}: Error finding container 5b0af563996d130d28514d056a630be3d38fabfd87ab3206a0f1e089db26de1a: Status 404 returned error can't find the container with id 5b0af563996d130d28514d056a630be3d38fabfd87ab3206a0f1e089db26de1a Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.219890 4625 patch_prober.go:28] 
interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:46:57 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:46:57 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:46:57 crc kubenswrapper[4625]: healthz check failed Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.220093 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.309913 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-77pll"] Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.311012 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.330490 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-77pll"] Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.336561 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.440275 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a736316a-06cf-4768-bb70-f5c9ed61de8f-catalog-content\") pod \"redhat-marketplace-77pll\" (UID: \"a736316a-06cf-4768-bb70-f5c9ed61de8f\") " pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.440803 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a736316a-06cf-4768-bb70-f5c9ed61de8f-utilities\") pod \"redhat-marketplace-77pll\" (UID: \"a736316a-06cf-4768-bb70-f5c9ed61de8f\") " pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.440844 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqzw5\" (UniqueName: \"kubernetes.io/projected/a736316a-06cf-4768-bb70-f5c9ed61de8f-kube-api-access-lqzw5\") pod \"redhat-marketplace-77pll\" (UID: \"a736316a-06cf-4768-bb70-f5c9ed61de8f\") " pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.524207 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.524274 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.525774 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 
container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.525851 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.545214 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a736316a-06cf-4768-bb70-f5c9ed61de8f-catalog-content\") pod \"redhat-marketplace-77pll\" (UID: \"a736316a-06cf-4768-bb70-f5c9ed61de8f\") " pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.545292 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a736316a-06cf-4768-bb70-f5c9ed61de8f-utilities\") pod \"redhat-marketplace-77pll\" (UID: \"a736316a-06cf-4768-bb70-f5c9ed61de8f\") " pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.545356 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqzw5\" (UniqueName: \"kubernetes.io/projected/a736316a-06cf-4768-bb70-f5c9ed61de8f-kube-api-access-lqzw5\") pod \"redhat-marketplace-77pll\" (UID: \"a736316a-06cf-4768-bb70-f5c9ed61de8f\") " pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.546086 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a736316a-06cf-4768-bb70-f5c9ed61de8f-catalog-content\") pod \"redhat-marketplace-77pll\" (UID: \"a736316a-06cf-4768-bb70-f5c9ed61de8f\") " pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.546390 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a736316a-06cf-4768-bb70-f5c9ed61de8f-utilities\") pod \"redhat-marketplace-77pll\" (UID: \"a736316a-06cf-4768-bb70-f5c9ed61de8f\") " pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.585097 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r84nn"] Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.612740 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqzw5\" (UniqueName: \"kubernetes.io/projected/a736316a-06cf-4768-bb70-f5c9ed61de8f-kube-api-access-lqzw5\") pod \"redhat-marketplace-77pll\" (UID: \"a736316a-06cf-4768-bb70-f5c9ed61de8f\") " pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.683750 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.734756 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lb2rr"] Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.736362 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.763028 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-catalog-content\") pod \"redhat-marketplace-lb2rr\" (UID: \"9ad29b6a-7f18-4ed4-9f10-25f93fecb421\") " pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.763113 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-utilities\") pod \"redhat-marketplace-lb2rr\" (UID: \"9ad29b6a-7f18-4ed4-9f10-25f93fecb421\") " pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.763139 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2dgk\" (UniqueName: \"kubernetes.io/projected/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-kube-api-access-d2dgk\") pod \"redhat-marketplace-lb2rr\" (UID: \"9ad29b6a-7f18-4ed4-9f10-25f93fecb421\") " pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.766546 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lb2rr"] Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.775004 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"86b3568a-5997-46ee-99b9-89fb517328cc","Type":"ContainerStarted","Data":"58fbad4d70f3022701f3f77bb8d42551cd9f30200fb72edd3ef48bbbca334d06"} Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.777259 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ebeb796e-22f3-4ec9-ac27-f966a4078864","Type":"ContainerStarted","Data":"5b0af563996d130d28514d056a630be3d38fabfd87ab3206a0f1e089db26de1a"} Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.870746 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-utilities\") pod \"redhat-marketplace-lb2rr\" (UID: \"9ad29b6a-7f18-4ed4-9f10-25f93fecb421\") " pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.870827 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2dgk\" (UniqueName: \"kubernetes.io/projected/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-kube-api-access-d2dgk\") pod \"redhat-marketplace-lb2rr\" (UID: \"9ad29b6a-7f18-4ed4-9f10-25f93fecb421\") " pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.870910 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-catalog-content\") pod \"redhat-marketplace-lb2rr\" (UID: \"9ad29b6a-7f18-4ed4-9f10-25f93fecb421\") " pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.871510 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-catalog-content\") pod \"redhat-marketplace-lb2rr\" (UID: \"9ad29b6a-7f18-4ed4-9f10-25f93fecb421\") " pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.873271 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-utilities\") pod \"redhat-marketplace-lb2rr\" (UID: \"9ad29b6a-7f18-4ed4-9f10-25f93fecb421\") " pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.945456 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2dgk\" (UniqueName: \"kubernetes.io/projected/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-kube-api-access-d2dgk\") pod \"redhat-marketplace-lb2rr\" (UID: \"9ad29b6a-7f18-4ed4-9f10-25f93fecb421\") " pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:46:57 crc kubenswrapper[4625]: I1202 13:46:57.985601 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.068632 4625 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tjbfd container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]log ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]etcd ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/generic-apiserver-start-informers ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/max-in-flight-filter ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 02 13:46:58 crc kubenswrapper[4625]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 02 13:46:58 crc kubenswrapper[4625]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/project.openshift.io-projectcache ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/openshift.io-startinformers ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 02 13:46:58 crc kubenswrapper[4625]: livez check failed Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.068753 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" podUID="2d498f8f-b8ca-41f0-96a7-d1c170a2fa15" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.079506 4625 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tjbfd container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]log ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]etcd ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/generic-apiserver-start-informers ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/max-in-flight-filter ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 02 13:46:58 crc kubenswrapper[4625]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 02 13:46:58 crc kubenswrapper[4625]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/project.openshift.io-projectcache ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/openshift.io-startinformers ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 02 13:46:58 crc kubenswrapper[4625]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 02 13:46:58 crc kubenswrapper[4625]: livez check failed Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.079624 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" podUID="2d498f8f-b8ca-41f0-96a7-d1c170a2fa15" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.120538 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:46:58 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:46:58 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:46:58 crc kubenswrapper[4625]: healthz check failed Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.120605 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.170393 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=5.170372951 podStartE2EDuration="5.170372951s" podCreationTimestamp="2025-12-02 13:46:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:46:57.927282701 +0000 UTC m=+173.889459796" watchObservedRunningTime="2025-12-02 13:46:58.170372951 +0000 UTC m=+174.132550026" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.191975 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-5pmg6"] Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.331085 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zwb28"] Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.332201 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.346123 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.376870 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hrfrv" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.381738 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zwb28"] Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.388988 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77bq8\" (UniqueName: \"kubernetes.io/projected/4d302da4-c96b-4efd-be3e-104812b4adfa-kube-api-access-77bq8\") pod \"redhat-operators-zwb28\" (UID: \"4d302da4-c96b-4efd-be3e-104812b4adfa\") " pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.389070 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d302da4-c96b-4efd-be3e-104812b4adfa-catalog-content\") pod \"redhat-operators-zwb28\" (UID: \"4d302da4-c96b-4efd-be3e-104812b4adfa\") " pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.389114 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d302da4-c96b-4efd-be3e-104812b4adfa-utilities\") pod \"redhat-operators-zwb28\" (UID: \"4d302da4-c96b-4efd-be3e-104812b4adfa\") " pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.490586 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d302da4-c96b-4efd-be3e-104812b4adfa-catalog-content\") pod \"redhat-operators-zwb28\" (UID: \"4d302da4-c96b-4efd-be3e-104812b4adfa\") " pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.490975 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d302da4-c96b-4efd-be3e-104812b4adfa-utilities\") pod \"redhat-operators-zwb28\" (UID: \"4d302da4-c96b-4efd-be3e-104812b4adfa\") " pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.491017 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77bq8\" (UniqueName: \"kubernetes.io/projected/4d302da4-c96b-4efd-be3e-104812b4adfa-kube-api-access-77bq8\") pod \"redhat-operators-zwb28\" (UID: \"4d302da4-c96b-4efd-be3e-104812b4adfa\") " pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.492082 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4d302da4-c96b-4efd-be3e-104812b4adfa-catalog-content\") pod \"redhat-operators-zwb28\" (UID: \"4d302da4-c96b-4efd-be3e-104812b4adfa\") " pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.492489 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d302da4-c96b-4efd-be3e-104812b4adfa-utilities\") pod \"redhat-operators-zwb28\" (UID: \"4d302da4-c96b-4efd-be3e-104812b4adfa\") " pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.522094 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ws49j"] Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.564167 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ds7zw"] Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.600452 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77bq8\" (UniqueName: \"kubernetes.io/projected/4d302da4-c96b-4efd-be3e-104812b4adfa-kube-api-access-77bq8\") pod \"redhat-operators-zwb28\" (UID: \"4d302da4-c96b-4efd-be3e-104812b4adfa\") " pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.682411 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.698168 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cnz8w"] Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.705282 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.731101 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3dd5657-6642-43f3-922f-37dea47fe07a-utilities\") pod \"redhat-operators-cnz8w\" (UID: \"b3dd5657-6642-43f3-922f-37dea47fe07a\") " pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.732609 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3dd5657-6642-43f3-922f-37dea47fe07a-catalog-content\") pod \"redhat-operators-cnz8w\" (UID: \"b3dd5657-6642-43f3-922f-37dea47fe07a\") " pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.732656 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ndbc\" (UniqueName: \"kubernetes.io/projected/b3dd5657-6642-43f3-922f-37dea47fe07a-kube-api-access-8ndbc\") pod \"redhat-operators-cnz8w\" (UID: \"b3dd5657-6642-43f3-922f-37dea47fe07a\") " pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.733334 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cnz8w"] Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.789226 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sc4p7"] Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.803793 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pmg6" event={"ID":"36ff365b-030a-4ee4-9819-c5c41464213d","Type":"ContainerStarted","Data":"173deed824c7d1998f912deb3c1673586ab1dfa4e9f769dc7366d0662deb459c"} Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.803855 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pmg6" event={"ID":"36ff365b-030a-4ee4-9819-c5c41464213d","Type":"ContainerStarted","Data":"c8d68bb9863638a7fc24b28868886233b79a7b45e8f2b376a93ebbb35122e632"} Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.820539 4625 generic.go:334] "Generic (PLEG): container finished" podID="ebeb796e-22f3-4ec9-ac27-f966a4078864" containerID="3b23064e9a3665d415b11228c850125bc4e73e9901a6fc1cff8490997ef406ed" exitCode=0 Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.820781 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ebeb796e-22f3-4ec9-ac27-f966a4078864","Type":"ContainerDied","Data":"3b23064e9a3665d415b11228c850125bc4e73e9901a6fc1cff8490997ef406ed"} Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.843211 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3dd5657-6642-43f3-922f-37dea47fe07a-utilities\") pod \"redhat-operators-cnz8w\" (UID: \"b3dd5657-6642-43f3-922f-37dea47fe07a\") " pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.843296 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3dd5657-6642-43f3-922f-37dea47fe07a-catalog-content\") pod \"redhat-operators-cnz8w\" 
(UID: \"b3dd5657-6642-43f3-922f-37dea47fe07a\") " pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.843343 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ndbc\" (UniqueName: \"kubernetes.io/projected/b3dd5657-6642-43f3-922f-37dea47fe07a-kube-api-access-8ndbc\") pod \"redhat-operators-cnz8w\" (UID: \"b3dd5657-6642-43f3-922f-37dea47fe07a\") " pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.931237 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3dd5657-6642-43f3-922f-37dea47fe07a-catalog-content\") pod \"redhat-operators-cnz8w\" (UID: \"b3dd5657-6642-43f3-922f-37dea47fe07a\") " pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:46:58 crc kubenswrapper[4625]: I1202 13:46:58.955201 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3dd5657-6642-43f3-922f-37dea47fe07a-utilities\") pod \"redhat-operators-cnz8w\" (UID: \"b3dd5657-6642-43f3-922f-37dea47fe07a\") " pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.007849 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ndbc\" (UniqueName: \"kubernetes.io/projected/b3dd5657-6642-43f3-922f-37dea47fe07a-kube-api-access-8ndbc\") pod \"redhat-operators-cnz8w\" (UID: \"b3dd5657-6642-43f3-922f-37dea47fe07a\") " pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.008181 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws49j" event={"ID":"e24375bb-53a2-4ee7-992e-4d57c2293536","Type":"ContainerStarted","Data":"ba41552936fd11b0bc3aab1eb915bf35e8f780f0f4fa48df35a693d2fa0c1c60"} Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.025036 4625 generic.go:334] "Generic (PLEG): container finished" podID="039b4452-411a-43c5-9823-860c079e5de3" containerID="92de1b6a3e4c7f50074a9abf0d550d6d97d888bac452bfa974817f8180d20cd5" exitCode=0 Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.025143 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r84nn" event={"ID":"039b4452-411a-43c5-9823-860c079e5de3","Type":"ContainerDied","Data":"92de1b6a3e4c7f50074a9abf0d550d6d97d888bac452bfa974817f8180d20cd5"} Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.025190 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r84nn" event={"ID":"039b4452-411a-43c5-9823-860c079e5de3","Type":"ContainerStarted","Data":"bab18a710c725db189e22c94298d400fae2ca585fd3a21b21286928031fa796c"} Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.032401 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ds7zw" event={"ID":"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7","Type":"ContainerStarted","Data":"38d40aa46fc697def0883acfdb0052cf016cd8084bf994e0a0f411fefb00cce6"} Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.033101 4625 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.043178 4625 generic.go:334] "Generic (PLEG): container finished" podID="c8bad892-59d1-45b5-a388-156353675860" 
containerID="8d030dcfc52bd3a37718d1d50f8ce82c519aaf5871fc1b47e3aae65252c619df" exitCode=0 Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.043353 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" event={"ID":"c8bad892-59d1-45b5-a388-156353675860","Type":"ContainerDied","Data":"8d030dcfc52bd3a37718d1d50f8ce82c519aaf5871fc1b47e3aae65252c619df"} Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.060259 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.120172 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:46:59 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:46:59 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:46:59 crc kubenswrapper[4625]: healthz check failed Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.120798 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.684636 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-77pll"] Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.731191 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lb2rr"] Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.953562 4625 patch_prober.go:28] interesting pod/console-f9d7485db-pr728 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.954118 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-pr728" podUID="15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" containerName="console" probeResult="failure" output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" Dec 02 13:46:59 crc kubenswrapper[4625]: I1202 13:46:59.963945 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zwb28"] Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.117586 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:00 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:00 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:00 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.117646 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.126657 4625 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lb2rr" event={"ID":"9ad29b6a-7f18-4ed4-9f10-25f93fecb421","Type":"ContainerStarted","Data":"b0430d5eaad15a860a9c6ba991dacd754c9813df4e269154996f4587eea12aae"} Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.137635 4625 generic.go:334] "Generic (PLEG): container finished" podID="e24375bb-53a2-4ee7-992e-4d57c2293536" containerID="4e428abbc8757011b0e2a59a6a2cc9e645dccf34f1f893445c43c747c4717fef" exitCode=0 Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.140087 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws49j" event={"ID":"e24375bb-53a2-4ee7-992e-4d57c2293536","Type":"ContainerDied","Data":"4e428abbc8757011b0e2a59a6a2cc9e645dccf34f1f893445c43c747c4717fef"} Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.156019 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwb28" event={"ID":"4d302da4-c96b-4efd-be3e-104812b4adfa","Type":"ContainerStarted","Data":"49fd83f8636d85002a58e5d522732fe753bac986bf4ab04027ca82ddcd70693c"} Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.205761 4625 generic.go:334] "Generic (PLEG): container finished" podID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" containerID="bc4dfc67d8976a947041980b6e619a956e649951911e270dbe61a91ef7d03eb2" exitCode=0 Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.205871 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ds7zw" event={"ID":"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7","Type":"ContainerDied","Data":"bc4dfc67d8976a947041980b6e619a956e649951911e270dbe61a91ef7d03eb2"} Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.218243 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77pll" event={"ID":"a736316a-06cf-4768-bb70-f5c9ed61de8f","Type":"ContainerStarted","Data":"9078691d8799800387b35fb8fa8d09cf8c2cfb9be263752c3d61060b764f85d6"} Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.222060 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.240350 4625 generic.go:334] "Generic (PLEG): container finished" podID="86b3568a-5997-46ee-99b9-89fb517328cc" containerID="58fbad4d70f3022701f3f77bb8d42551cd9f30200fb72edd3ef48bbbca334d06" exitCode=0 Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.240463 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"86b3568a-5997-46ee-99b9-89fb517328cc","Type":"ContainerDied","Data":"58fbad4d70f3022701f3f77bb8d42551cd9f30200fb72edd3ef48bbbca334d06"} Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.246402 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" event={"ID":"c87d97fb-8391-4f0f-8b3d-a404721de262","Type":"ContainerStarted","Data":"5159b10f6b186f7766c36a0d7d1ed189b9bce04e91d82ca3cfad8dbb013d2db4"} Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.246452 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" event={"ID":"c87d97fb-8391-4f0f-8b3d-a404721de262","Type":"ContainerStarted","Data":"91968f780d930fe60b855a2569264c6b072b4fb2d225c09d665488692898d94f"} Dec 02 13:47:00 crc kubenswrapper[4625]: 
I1202 13:47:00.247090 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.251396 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cnz8w"] Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.252987 4625 generic.go:334] "Generic (PLEG): container finished" podID="36ff365b-030a-4ee4-9819-c5c41464213d" containerID="173deed824c7d1998f912deb3c1673586ab1dfa4e9f769dc7366d0662deb459c" exitCode=0 Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.254461 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pmg6" event={"ID":"36ff365b-030a-4ee4-9819-c5c41464213d","Type":"ContainerDied","Data":"173deed824c7d1998f912deb3c1673586ab1dfa4e9f769dc7366d0662deb459c"} Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.281057 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.281272 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" Dec 02 13:47:00 crc kubenswrapper[4625]: I1202 13:47:00.454370 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" podStartSLOduration=154.454347335 podStartE2EDuration="2m34.454347335s" podCreationTimestamp="2025-12-02 13:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:47:00.451372084 +0000 UTC m=+176.413549179" watchObservedRunningTime="2025-12-02 13:47:00.454347335 +0000 UTC m=+176.416524410" Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.320108 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:01 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:01 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:01 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.320474 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.411800 4625 generic.go:334] "Generic (PLEG): container finished" podID="4d302da4-c96b-4efd-be3e-104812b4adfa" containerID="f2713deb56806c1da2b0833f58ff0daaa107bdd8d983ad95a484383d48319bce" exitCode=0 Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.411913 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwb28" event={"ID":"4d302da4-c96b-4efd-be3e-104812b4adfa","Type":"ContainerDied","Data":"f2713deb56806c1da2b0833f58ff0daaa107bdd8d983ad95a484383d48319bce"} Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.416387 4625 generic.go:334] "Generic (PLEG): container finished" podID="a736316a-06cf-4768-bb70-f5c9ed61de8f" 
containerID="fcfb8bbc724ce9d5883a6947b3907c2b8535c88216c311587d2e572ede040d1b" exitCode=0 Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.416458 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77pll" event={"ID":"a736316a-06cf-4768-bb70-f5c9ed61de8f","Type":"ContainerDied","Data":"fcfb8bbc724ce9d5883a6947b3907c2b8535c88216c311587d2e572ede040d1b"} Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.433086 4625 generic.go:334] "Generic (PLEG): container finished" podID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" containerID="b5578198fc6bc265ee6b7509bf70811817e75fee06dc988842362c1a9432e5f9" exitCode=0 Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.433455 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lb2rr" event={"ID":"9ad29b6a-7f18-4ed4-9f10-25f93fecb421","Type":"ContainerDied","Data":"b5578198fc6bc265ee6b7509bf70811817e75fee06dc988842362c1a9432e5f9"} Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.437767 4625 generic.go:334] "Generic (PLEG): container finished" podID="b3dd5657-6642-43f3-922f-37dea47fe07a" containerID="12ccf97615e677093a387153128cdf81b7c3a1dd9323bb64144f9556f38b753e" exitCode=0 Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.439167 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnz8w" event={"ID":"b3dd5657-6642-43f3-922f-37dea47fe07a","Type":"ContainerDied","Data":"12ccf97615e677093a387153128cdf81b7c3a1dd9323bb64144f9556f38b753e"} Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.439204 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnz8w" event={"ID":"b3dd5657-6642-43f3-922f-37dea47fe07a","Type":"ContainerStarted","Data":"2629cfac8c067738f03e9db486e8af88df903505b7ca49a579b7b23271dedfe5"} Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.895303 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.936234 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bssbg\" (UniqueName: \"kubernetes.io/projected/c8bad892-59d1-45b5-a388-156353675860-kube-api-access-bssbg\") pod \"c8bad892-59d1-45b5-a388-156353675860\" (UID: \"c8bad892-59d1-45b5-a388-156353675860\") " Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.936291 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8bad892-59d1-45b5-a388-156353675860-config-volume\") pod \"c8bad892-59d1-45b5-a388-156353675860\" (UID: \"c8bad892-59d1-45b5-a388-156353675860\") " Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.936331 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8bad892-59d1-45b5-a388-156353675860-secret-volume\") pod \"c8bad892-59d1-45b5-a388-156353675860\" (UID: \"c8bad892-59d1-45b5-a388-156353675860\") " Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.939528 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8bad892-59d1-45b5-a388-156353675860-config-volume" (OuterVolumeSpecName: "config-volume") pod "c8bad892-59d1-45b5-a388-156353675860" (UID: "c8bad892-59d1-45b5-a388-156353675860"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.948432 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8bad892-59d1-45b5-a388-156353675860-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c8bad892-59d1-45b5-a388-156353675860" (UID: "c8bad892-59d1-45b5-a388-156353675860"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:47:01 crc kubenswrapper[4625]: I1202 13:47:01.949115 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8bad892-59d1-45b5-a388-156353675860-kube-api-access-bssbg" (OuterVolumeSpecName: "kube-api-access-bssbg") pod "c8bad892-59d1-45b5-a388-156353675860" (UID: "c8bad892-59d1-45b5-a388-156353675860"). InnerVolumeSpecName "kube-api-access-bssbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.022411 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.037295 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebeb796e-22f3-4ec9-ac27-f966a4078864-kubelet-dir\") pod \"ebeb796e-22f3-4ec9-ac27-f966a4078864\" (UID: \"ebeb796e-22f3-4ec9-ac27-f966a4078864\") " Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.037393 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebeb796e-22f3-4ec9-ac27-f966a4078864-kube-api-access\") pod \"ebeb796e-22f3-4ec9-ac27-f966a4078864\" (UID: \"ebeb796e-22f3-4ec9-ac27-f966a4078864\") " Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.037674 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bssbg\" (UniqueName: \"kubernetes.io/projected/c8bad892-59d1-45b5-a388-156353675860-kube-api-access-bssbg\") on node \"crc\" DevicePath \"\"" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.037703 4625 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8bad892-59d1-45b5-a388-156353675860-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.037713 4625 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8bad892-59d1-45b5-a388-156353675860-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.038082 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebeb796e-22f3-4ec9-ac27-f966a4078864-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ebeb796e-22f3-4ec9-ac27-f966a4078864" (UID: "ebeb796e-22f3-4ec9-ac27-f966a4078864"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.041449 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebeb796e-22f3-4ec9-ac27-f966a4078864-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ebeb796e-22f3-4ec9-ac27-f966a4078864" (UID: "ebeb796e-22f3-4ec9-ac27-f966a4078864"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.052421 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.147570 4625 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebeb796e-22f3-4ec9-ac27-f966a4078864-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.147608 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ebeb796e-22f3-4ec9-ac27-f966a4078864-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.154062 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:02 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:02 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:02 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.154116 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.298020 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86b3568a-5997-46ee-99b9-89fb517328cc-kube-api-access\") pod \"86b3568a-5997-46ee-99b9-89fb517328cc\" (UID: \"86b3568a-5997-46ee-99b9-89fb517328cc\") " Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.301931 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86b3568a-5997-46ee-99b9-89fb517328cc-kubelet-dir\") pod \"86b3568a-5997-46ee-99b9-89fb517328cc\" (UID: \"86b3568a-5997-46ee-99b9-89fb517328cc\") " Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.302148 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86b3568a-5997-46ee-99b9-89fb517328cc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "86b3568a-5997-46ee-99b9-89fb517328cc" (UID: "86b3568a-5997-46ee-99b9-89fb517328cc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.307228 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b3568a-5997-46ee-99b9-89fb517328cc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "86b3568a-5997-46ee-99b9-89fb517328cc" (UID: "86b3568a-5997-46ee-99b9-89fb517328cc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.307563 4625 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86b3568a-5997-46ee-99b9-89fb517328cc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.307619 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86b3568a-5997-46ee-99b9-89fb517328cc-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.478357 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.485678 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"86b3568a-5997-46ee-99b9-89fb517328cc","Type":"ContainerDied","Data":"c9ad457d2a71cbafcb908b7702d04730f6a9d8437b342b7edbf6f8a2b308966d"} Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.485854 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9ad457d2a71cbafcb908b7702d04730f6a9d8437b342b7edbf6f8a2b308966d" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.495148 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.498918 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-tjbfd" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.499427 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.499466 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ebeb796e-22f3-4ec9-ac27-f966a4078864","Type":"ContainerDied","Data":"5b0af563996d130d28514d056a630be3d38fabfd87ab3206a0f1e089db26de1a"} Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.499509 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b0af563996d130d28514d056a630be3d38fabfd87ab3206a0f1e089db26de1a" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.552375 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.552418 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44" event={"ID":"c8bad892-59d1-45b5-a388-156353675860","Type":"ContainerDied","Data":"7e15a380c4958922635a3f0ef876a4a29f0f3e0243712bcd54a07d1ffa74f9ea"} Dec 02 13:47:02 crc kubenswrapper[4625]: I1202 13:47:02.552449 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e15a380c4958922635a3f0ef876a4a29f0f3e0243712bcd54a07d1ffa74f9ea" Dec 02 13:47:03 crc kubenswrapper[4625]: I1202 13:47:03.103990 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:03 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:03 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:03 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:03 crc kubenswrapper[4625]: I1202 13:47:03.104050 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:04 crc kubenswrapper[4625]: I1202 13:47:04.109969 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:04 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:04 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:04 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:04 crc kubenswrapper[4625]: I1202 13:47:04.110032 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:05 crc kubenswrapper[4625]: I1202 13:47:05.104895 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:05 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:05 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:05 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:05 crc kubenswrapper[4625]: I1202 13:47:05.105197 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:06 crc kubenswrapper[4625]: I1202 13:47:06.164811 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:06 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 
13:47:06 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:06 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:06 crc kubenswrapper[4625]: I1202 13:47:06.164923 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:07 crc kubenswrapper[4625]: I1202 13:47:07.104373 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:07 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:07 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:07 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:07 crc kubenswrapper[4625]: I1202 13:47:07.104446 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:07 crc kubenswrapper[4625]: I1202 13:47:07.522398 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:47:07 crc kubenswrapper[4625]: I1202 13:47:07.522480 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:47:07 crc kubenswrapper[4625]: I1202 13:47:07.522538 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-5sq66" Dec 02 13:47:07 crc kubenswrapper[4625]: I1202 13:47:07.522966 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:47:07 crc kubenswrapper[4625]: I1202 13:47:07.523050 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:47:07 crc kubenswrapper[4625]: I1202 13:47:07.524406 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"0dc7b6a8a6cf9d738fa028fb17aa963e4e95235ea0dce8d557b9f2d4005387b2"} pod="openshift-console/downloads-7954f5f757-5sq66" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 02 13:47:07 crc kubenswrapper[4625]: I1202 13:47:07.524529 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" 
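Each router probe failure above carries the healthz-style body returned by the endpoint: subchecks prefixed [-] are failing (backend-http, has-synced), [+] are passing (process-running), and the trailing "healthz check failed" is the overall verdict behind the HTTP 500. A small sketch of parsing such a body, using the subcheck lines quoted from the entries above:

    # Healthz convention seen in the probe output above: "[-]" failing, "[+]" passing.
    body = "\n".join([
        "[-]backend-http failed: reason withheld",
        "[-]has-synced failed: reason withheld",
        "[+]process-running ok",
        "healthz check failed",
    ])

    failing = [l[3:].split()[0] for l in body.splitlines() if l.startswith("[-]")]
    passing = [l[3:].split()[0] for l in body.splitlines() if l.startswith("[+]")]
    print("failing:", failing)  # ['backend-http', 'has-synced']
    print("passing:", passing)  # ['process-running']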
containerID="cri-o://0dc7b6a8a6cf9d738fa028fb17aa963e4e95235ea0dce8d557b9f2d4005387b2" gracePeriod=2 Dec 02 13:47:07 crc kubenswrapper[4625]: I1202 13:47:07.524942 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:47:07 crc kubenswrapper[4625]: I1202 13:47:07.524973 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:47:08 crc kubenswrapper[4625]: I1202 13:47:08.126110 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:08 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:08 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:08 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:08 crc kubenswrapper[4625]: I1202 13:47:08.127735 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:08 crc kubenswrapper[4625]: I1202 13:47:08.896578 4625 generic.go:334] "Generic (PLEG): container finished" podID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerID="0dc7b6a8a6cf9d738fa028fb17aa963e4e95235ea0dce8d557b9f2d4005387b2" exitCode=0 Dec 02 13:47:08 crc kubenswrapper[4625]: I1202 13:47:08.922368 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5sq66" event={"ID":"637baf2f-239a-405a-8cde-a46bf3f7877d","Type":"ContainerDied","Data":"0dc7b6a8a6cf9d738fa028fb17aa963e4e95235ea0dce8d557b9f2d4005387b2"} Dec 02 13:47:08 crc kubenswrapper[4625]: I1202 13:47:08.922430 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5sq66" event={"ID":"637baf2f-239a-405a-8cde-a46bf3f7877d","Type":"ContainerStarted","Data":"35e1eb07541dd498e36549b8c080a5c17dce527f0e6d951e6dbdc272273c9ca4"} Dec 02 13:47:08 crc kubenswrapper[4625]: I1202 13:47:08.924282 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:47:08 crc kubenswrapper[4625]: I1202 13:47:08.924345 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:47:08 crc kubenswrapper[4625]: I1202 13:47:08.924730 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5sq66" Dec 02 13:47:09 crc kubenswrapper[4625]: I1202 13:47:09.106842 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:09 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:09 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:09 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:09 crc kubenswrapper[4625]: I1202 13:47:09.106923 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:09 crc kubenswrapper[4625]: I1202 13:47:09.957303 4625 patch_prober.go:28] interesting pod/console-f9d7485db-pr728 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Dec 02 13:47:09 crc kubenswrapper[4625]: I1202 13:47:09.957422 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-pr728" podUID="15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" containerName="console" probeResult="failure" output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" Dec 02 13:47:10 crc kubenswrapper[4625]: I1202 13:47:10.036580 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:47:10 crc kubenswrapper[4625]: I1202 13:47:10.036688 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:47:10 crc kubenswrapper[4625]: I1202 13:47:10.104575 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:10 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:10 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:10 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:10 crc kubenswrapper[4625]: I1202 13:47:10.104667 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:11 crc kubenswrapper[4625]: I1202 13:47:11.104666 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:11 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:11 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:11 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:11 crc kubenswrapper[4625]: I1202 13:47:11.104718 4625 prober.go:107] "Probe failed" probeType="Startup" 
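The downloads-pod failures above are the kubelet's HTTP probe in action: a GET against the container endpoint (http://10.217.0.9:8080/ here), where "connection refused" simply means nothing is listening yet. Roughly the same check as a sketch; the URL is taken from the log, the 1-second timeout is illustrative, and the pod IP is only reachable from the node or pod network:

    import urllib.request
    import urllib.error

    URL = "http://10.217.0.9:8080/"  # probe target from the entries above

    try:
        with urllib.request.urlopen(URL, timeout=1) as resp:
            print("probe success, status:", resp.status)  # 2xx/3xx counts as success
    except (urllib.error.URLError, OSError) as exc:
        print("probe failure:", exc)  # e.g. connection refused while the server starts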
pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:12 crc kubenswrapper[4625]: I1202 13:47:12.104877 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:12 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:12 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:12 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:12 crc kubenswrapper[4625]: I1202 13:47:12.105347 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:13 crc kubenswrapper[4625]: I1202 13:47:13.107846 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:13 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:13 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:13 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:13 crc kubenswrapper[4625]: I1202 13:47:13.107926 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:14 crc kubenswrapper[4625]: I1202 13:47:14.064715 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 02 13:47:14 crc kubenswrapper[4625]: I1202 13:47:14.102976 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:14 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:14 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:14 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:14 crc kubenswrapper[4625]: I1202 13:47:14.103673 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:15 crc kubenswrapper[4625]: I1202 13:47:15.106000 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:15 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:15 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:15 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:15 crc kubenswrapper[4625]: I1202 13:47:15.106122 4625 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:15 crc kubenswrapper[4625]: I1202 13:47:15.918651 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:47:16 crc kubenswrapper[4625]: I1202 13:47:16.155966 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:16 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:16 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:16 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:16 crc kubenswrapper[4625]: I1202 13:47:16.157652 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:17 crc kubenswrapper[4625]: I1202 13:47:17.104088 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 02 13:47:17 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld Dec 02 13:47:17 crc kubenswrapper[4625]: [+]process-running ok Dec 02 13:47:17 crc kubenswrapper[4625]: healthz check failed Dec 02 13:47:17 crc kubenswrapper[4625]: I1202 13:47:17.104261 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:17 crc kubenswrapper[4625]: I1202 13:47:17.521868 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:47:17 crc kubenswrapper[4625]: I1202 13:47:17.522299 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:47:17 crc kubenswrapper[4625]: I1202 13:47:17.522370 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:47:17 crc kubenswrapper[4625]: I1202 13:47:17.522426 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:47:18 crc kubenswrapper[4625]: I1202 13:47:18.111482 4625 patch_prober.go:28] interesting 
Dec 02 13:47:18 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld
Dec 02 13:47:18 crc kubenswrapper[4625]: [+]process-running ok
Dec 02 13:47:18 crc kubenswrapper[4625]: healthz check failed
Dec 02 13:47:18 crc kubenswrapper[4625]: I1202 13:47:18.111551 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 02 13:47:19 crc kubenswrapper[4625]: I1202 13:47:19.107931 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 02 13:47:19 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld
Dec 02 13:47:19 crc kubenswrapper[4625]: [+]process-running ok
Dec 02 13:47:19 crc kubenswrapper[4625]: healthz check failed
Dec 02 13:47:19 crc kubenswrapper[4625]: I1202 13:47:19.108468 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 02 13:47:19 crc kubenswrapper[4625]: I1202 13:47:19.271026 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 13:47:19 crc kubenswrapper[4625]: I1202 13:47:19.271083 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 13:47:20 crc kubenswrapper[4625]: I1202 13:47:20.171566 4625 patch_prober.go:28] interesting pod/console-f9d7485db-pr728 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Dec 02 13:47:20 crc kubenswrapper[4625]: I1202 13:47:20.171674 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-pr728" podUID="15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" containerName="console" probeResult="failure" output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused"
Dec 02 13:47:20 crc kubenswrapper[4625]: I1202 13:47:20.172214 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 02 13:47:20 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld
Dec 02 13:47:20 crc kubenswrapper[4625]: [+]process-running ok
Dec 02 13:47:20 crc kubenswrapper[4625]: healthz check failed
Dec 02 13:47:20 crc kubenswrapper[4625]: I1202 13:47:20.172351 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 02 13:47:20 crc kubenswrapper[4625]: I1202 13:47:20.307176 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wqc6c"
Dec 02 13:47:21 crc kubenswrapper[4625]: I1202 13:47:21.122404 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 02 13:47:21 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld
Dec 02 13:47:21 crc kubenswrapper[4625]: [+]process-running ok
Dec 02 13:47:21 crc kubenswrapper[4625]: healthz check failed
Dec 02 13:47:21 crc kubenswrapper[4625]: I1202 13:47:21.122517 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 02 13:47:22 crc kubenswrapper[4625]: I1202 13:47:22.118192 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 02 13:47:22 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld
Dec 02 13:47:22 crc kubenswrapper[4625]: [+]process-running ok
Dec 02 13:47:22 crc kubenswrapper[4625]: healthz check failed
Dec 02 13:47:22 crc kubenswrapper[4625]: I1202 13:47:22.118405 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 02 13:47:23 crc kubenswrapper[4625]: I1202 13:47:23.128748 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 02 13:47:23 crc kubenswrapper[4625]: [-]has-synced failed: reason withheld
Dec 02 13:47:23 crc kubenswrapper[4625]: [+]process-running ok
Dec 02 13:47:23 crc kubenswrapper[4625]: healthz check failed
Dec 02 13:47:23 crc kubenswrapper[4625]: I1202 13:47:23.128893 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 02 13:47:24 crc kubenswrapper[4625]: I1202 13:47:24.104761 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 02 13:47:24 crc kubenswrapper[4625]: [+]has-synced ok
Dec 02 13:47:24 crc kubenswrapper[4625]: [+]process-running ok
Dec 02 13:47:24 crc kubenswrapper[4625]: healthz check failed
Dec 02 13:47:24 crc kubenswrapper[4625]: I1202 13:47:24.106018 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 02 13:47:25 crc kubenswrapper[4625]: I1202 13:47:25.211799 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:47:25 crc kubenswrapper[4625]: I1202 13:47:25.215710 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-pqzl9" Dec 02 13:47:27 crc kubenswrapper[4625]: I1202 13:47:27.537417 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:47:27 crc kubenswrapper[4625]: I1202 13:47:27.537860 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:47:27 crc kubenswrapper[4625]: I1202 13:47:27.538169 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:47:27 crc kubenswrapper[4625]: I1202 13:47:27.538188 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.410485 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 13:47:30 crc kubenswrapper[4625]: E1202 13:47:30.410814 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebeb796e-22f3-4ec9-ac27-f966a4078864" containerName="pruner" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.410828 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebeb796e-22f3-4ec9-ac27-f966a4078864" containerName="pruner" Dec 02 13:47:30 crc kubenswrapper[4625]: E1202 13:47:30.410840 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b3568a-5997-46ee-99b9-89fb517328cc" containerName="pruner" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.410846 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b3568a-5997-46ee-99b9-89fb517328cc" containerName="pruner" Dec 02 13:47:30 crc kubenswrapper[4625]: E1202 13:47:30.410875 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8bad892-59d1-45b5-a388-156353675860" containerName="collect-profiles" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.410883 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8bad892-59d1-45b5-a388-156353675860" containerName="collect-profiles" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.411000 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8bad892-59d1-45b5-a388-156353675860" containerName="collect-profiles" Dec 
02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.411015 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="86b3568a-5997-46ee-99b9-89fb517328cc" containerName="pruner" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.411026 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebeb796e-22f3-4ec9-ac27-f966a4078864" containerName="pruner" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.411628 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.423503 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.423626 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.425643 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.477701 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.501702 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-pr728" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.579524 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.579615 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.684471 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.684556 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.684676 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.707408 4625 operation_generator.go:637] 
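Pod creation mirrors the teardown sequence seen earlier: for revision-pruner-9-crc (UID a6529aa9-...) each volume goes through VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded" before the sandbox can start. A companion sketch to the unmount checker above, flagging volumes that never reach SetUp success; the same escaped-quote caveat applies:

    import re
    import sys

    STARTED = re.compile(r'MountVolume started for volume \\?"([^"\\]+)\\?"')
    DONE = re.compile(r'MountVolume\.SetUp succeeded for volume \\?"([^"\\]+)\\?"')

    pending = set()
    for line in sys.stdin:
        if (m := STARTED.search(line)):
            pending.add(m.group(1))
        elif (m := DONE.search(line)):
            pending.discard(m.group(1))

    for vol in sorted(pending):
        print("mount started but no SetUp success seen:", vol)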
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:47:30 crc kubenswrapper[4625]: I1202 13:47:30.808173 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:47:35 crc kubenswrapper[4625]: I1202 13:47:35.878915 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 13:47:35 crc kubenswrapper[4625]: I1202 13:47:35.881076 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:47:35 crc kubenswrapper[4625]: I1202 13:47:35.894686 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 13:47:35 crc kubenswrapper[4625]: I1202 13:47:35.967484 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a853ad1c-016e-4012-a450-f79c3585216b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a853ad1c-016e-4012-a450-f79c3585216b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:47:35 crc kubenswrapper[4625]: I1202 13:47:35.967595 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a853ad1c-016e-4012-a450-f79c3585216b-var-lock\") pod \"installer-9-crc\" (UID: \"a853ad1c-016e-4012-a450-f79c3585216b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:47:35 crc kubenswrapper[4625]: I1202 13:47:35.967655 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a853ad1c-016e-4012-a450-f79c3585216b-kube-api-access\") pod \"installer-9-crc\" (UID: \"a853ad1c-016e-4012-a450-f79c3585216b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:47:36 crc kubenswrapper[4625]: I1202 13:47:36.068875 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a853ad1c-016e-4012-a450-f79c3585216b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a853ad1c-016e-4012-a450-f79c3585216b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:47:36 crc kubenswrapper[4625]: I1202 13:47:36.069047 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a853ad1c-016e-4012-a450-f79c3585216b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a853ad1c-016e-4012-a450-f79c3585216b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:47:36 crc kubenswrapper[4625]: I1202 13:47:36.069540 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a853ad1c-016e-4012-a450-f79c3585216b-var-lock\") pod \"installer-9-crc\" (UID: \"a853ad1c-016e-4012-a450-f79c3585216b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:47:36 crc kubenswrapper[4625]: I1202 13:47:36.069699 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a853ad1c-016e-4012-a450-f79c3585216b-kube-api-access\") pod \"installer-9-crc\" 
(UID: \"a853ad1c-016e-4012-a450-f79c3585216b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:47:36 crc kubenswrapper[4625]: I1202 13:47:36.069598 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a853ad1c-016e-4012-a450-f79c3585216b-var-lock\") pod \"installer-9-crc\" (UID: \"a853ad1c-016e-4012-a450-f79c3585216b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:47:36 crc kubenswrapper[4625]: I1202 13:47:36.125106 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a853ad1c-016e-4012-a450-f79c3585216b-kube-api-access\") pod \"installer-9-crc\" (UID: \"a853ad1c-016e-4012-a450-f79c3585216b\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:47:36 crc kubenswrapper[4625]: I1202 13:47:36.202593 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:47:37 crc kubenswrapper[4625]: I1202 13:47:37.522743 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:47:37 crc kubenswrapper[4625]: I1202 13:47:37.523136 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:47:37 crc kubenswrapper[4625]: I1202 13:47:37.523231 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-5sq66" Dec 02 13:47:37 crc kubenswrapper[4625]: I1202 13:47:37.523981 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"35e1eb07541dd498e36549b8c080a5c17dce527f0e6d951e6dbdc272273c9ca4"} pod="openshift-console/downloads-7954f5f757-5sq66" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 02 13:47:37 crc kubenswrapper[4625]: I1202 13:47:37.524031 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" containerID="cri-o://35e1eb07541dd498e36549b8c080a5c17dce527f0e6d951e6dbdc272273c9ca4" gracePeriod=2 Dec 02 13:47:37 crc kubenswrapper[4625]: I1202 13:47:37.559294 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:47:37 crc kubenswrapper[4625]: I1202 13:47:37.559389 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:47:37 crc kubenswrapper[4625]: I1202 13:47:37.559732 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: 
Dec 02 13:47:37 crc kubenswrapper[4625]: I1202 13:47:37.559764 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Dec 02 13:47:39 crc kubenswrapper[4625]: I1202 13:47:39.966465 4625 generic.go:334] "Generic (PLEG): container finished" podID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerID="35e1eb07541dd498e36549b8c080a5c17dce527f0e6d951e6dbdc272273c9ca4" exitCode=0
Dec 02 13:47:39 crc kubenswrapper[4625]: I1202 13:47:39.966536 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5sq66" event={"ID":"637baf2f-239a-405a-8cde-a46bf3f7877d","Type":"ContainerDied","Data":"35e1eb07541dd498e36549b8c080a5c17dce527f0e6d951e6dbdc272273c9ca4"}
Dec 02 13:47:39 crc kubenswrapper[4625]: I1202 13:47:39.966867 4625 scope.go:117] "RemoveContainer" containerID="0dc7b6a8a6cf9d738fa028fb17aa963e4e95235ea0dce8d557b9f2d4005387b2"
Dec 02 13:47:47 crc kubenswrapper[4625]: I1202 13:47:47.521446 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Dec 02 13:47:47 crc kubenswrapper[4625]: I1202 13:47:47.521820 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Dec 02 13:47:49 crc kubenswrapper[4625]: I1202 13:47:49.271080 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 13:47:49 crc kubenswrapper[4625]: I1202 13:47:49.271635 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 13:47:49 crc kubenswrapper[4625]: I1202 13:47:49.271703 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f"
Dec 02 13:47:49 crc kubenswrapper[4625]: I1202 13:47:49.272523 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 13:47:49 crc kubenswrapper[4625]: I1202 13:47:49.272575 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2" gracePeriod=600
Dec 02 13:47:50 crc kubenswrapper[4625]: I1202 13:47:50.195417 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2" exitCode=0
Dec 02 13:47:50 crc kubenswrapper[4625]: I1202 13:47:50.195489 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2"}
Dec 02 13:47:54 crc kubenswrapper[4625]: E1202 13:47:54.564383 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Dec 02 13:47:54 crc kubenswrapper[4625]: E1202 13:47:54.564832 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-npdlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-r84nn_openshift-marketplace(039b4452-411a-43c5-9823-860c079e5de3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 02 13:47:54 crc kubenswrapper[4625]: E1202 13:47:54.566703 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-r84nn" podUID="039b4452-411a-43c5-9823-860c079e5de3"
Dec 02 13:47:55 crc kubenswrapper[4625]: E1202 13:47:55.873337 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r84nn" podUID="039b4452-411a-43c5-9823-860c079e5de3"
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r84nn" podUID="039b4452-411a-43c5-9823-860c079e5de3" Dec 02 13:47:55 crc kubenswrapper[4625]: E1202 13:47:55.945031 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 02 13:47:55 crc kubenswrapper[4625]: E1202 13:47:55.945271 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqzw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-77pll_openshift-marketplace(a736316a-06cf-4768-bb70-f5c9ed61de8f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 13:47:55 crc kubenswrapper[4625]: E1202 13:47:55.947455 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-77pll" podUID="a736316a-06cf-4768-bb70-f5c9ed61de8f" Dec 02 13:47:57 crc kubenswrapper[4625]: I1202 13:47:57.521660 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:47:57 crc kubenswrapper[4625]: I1202 13:47:57.522043 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: 
connect: connection refused" Dec 02 13:47:57 crc kubenswrapper[4625]: E1202 13:47:57.796177 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-77pll" podUID="a736316a-06cf-4768-bb70-f5c9ed61de8f" Dec 02 13:47:57 crc kubenswrapper[4625]: E1202 13:47:57.866230 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 02 13:47:57 crc kubenswrapper[4625]: E1202 13:47:57.866590 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-99rgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ds7zw_openshift-marketplace(b8ede536-4ca2-48e0-ac63-7efdd3ec5de7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 13:47:57 crc kubenswrapper[4625]: E1202 13:47:57.867803 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ds7zw" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" Dec 02 13:47:59 crc kubenswrapper[4625]: I1202 13:47:59.974154 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x8tnt"] Dec 02 13:48:03 crc kubenswrapper[4625]: E1202 13:48:03.622855 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-ds7zw" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" Dec 02 13:48:03 crc kubenswrapper[4625]: E1202 13:48:03.718905 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 02 13:48:03 crc kubenswrapper[4625]: E1202 13:48:03.719128 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-77bq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zwb28_openshift-marketplace(4d302da4-c96b-4efd-be3e-104812b4adfa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 13:48:03 crc kubenswrapper[4625]: E1202 13:48:03.720394 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zwb28" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" Dec 02 13:48:03 crc kubenswrapper[4625]: E1202 13:48:03.740676 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 02 13:48:03 crc kubenswrapper[4625]: E1202 13:48:03.740952 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4tc2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ws49j_openshift-marketplace(e24375bb-53a2-4ee7-992e-4d57c2293536): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 13:48:03 crc kubenswrapper[4625]: E1202 13:48:03.742255 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ws49j" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" Dec 02 13:48:03 crc kubenswrapper[4625]: E1202 13:48:03.758122 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 02 13:48:03 crc kubenswrapper[4625]: E1202 13:48:03.758323 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ndbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cnz8w_openshift-marketplace(b3dd5657-6642-43f3-922f-37dea47fe07a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 13:48:03 crc kubenswrapper[4625]: E1202 13:48:03.760091 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-cnz8w" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" Dec 02 13:48:03 crc kubenswrapper[4625]: E1202 13:48:03.853534 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 02 13:48:03 crc kubenswrapper[4625]: E1202 13:48:03.854533 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmlvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5pmg6_openshift-marketplace(36ff365b-030a-4ee4-9819-c5c41464213d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 13:48:03 crc kubenswrapper[4625]: E1202 13:48:03.855674 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5pmg6" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" Dec 02 13:48:03 crc kubenswrapper[4625]: E1202 13:48:03.872549 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 02 13:48:03 crc kubenswrapper[4625]: E1202 13:48:03.872767 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2dgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lb2rr_openshift-marketplace(9ad29b6a-7f18-4ed4-9f10-25f93fecb421): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 13:48:03 crc kubenswrapper[4625]: E1202 13:48:03.874620 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lb2rr" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" Dec 02 13:48:04 crc kubenswrapper[4625]: I1202 13:48:04.107129 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 02 13:48:04 crc kubenswrapper[4625]: I1202 13:48:04.171412 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 02 13:48:04 crc kubenswrapper[4625]: I1202 13:48:04.333000 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e","Type":"ContainerStarted","Data":"10953ed89f71d41c6c5ed6d1684e0c23641322d49ff7cb320ff01a7b555a1d5c"} Dec 02 13:48:04 crc kubenswrapper[4625]: I1202 13:48:04.339005 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"90afaf702b407cb71259af2ad7b1c5b8d7e4cfd9bc8d832ed3732d63ee2b7839"} Dec 02 13:48:04 crc kubenswrapper[4625]: I1202 13:48:04.342282 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5sq66" event={"ID":"637baf2f-239a-405a-8cde-a46bf3f7877d","Type":"ContainerStarted","Data":"9c99f258490606b06efbd4cac08f6b7f27ee7d47a9f1f09aa655ca534aa917ad"} Dec 02 13:48:04 crc kubenswrapper[4625]: I1202 13:48:04.342745 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5sq66" Dec 02 13:48:04 crc kubenswrapper[4625]: I1202 13:48:04.345653 4625 
patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:48:04 crc kubenswrapper[4625]: I1202 13:48:04.345707 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:48:04 crc kubenswrapper[4625]: I1202 13:48:04.347124 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a853ad1c-016e-4012-a450-f79c3585216b","Type":"ContainerStarted","Data":"2dec17899b5191cd5d9ae71f9f18b70ddd211971844d69176a7eb54b59622a0c"} Dec 02 13:48:04 crc kubenswrapper[4625]: E1202 13:48:04.349269 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zwb28" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" Dec 02 13:48:04 crc kubenswrapper[4625]: E1202 13:48:04.349690 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cnz8w" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" Dec 02 13:48:04 crc kubenswrapper[4625]: E1202 13:48:04.350153 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ws49j" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" Dec 02 13:48:04 crc kubenswrapper[4625]: E1202 13:48:04.350268 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lb2rr" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" Dec 02 13:48:04 crc kubenswrapper[4625]: E1202 13:48:04.351009 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pmg6" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" Dec 02 13:48:05 crc kubenswrapper[4625]: I1202 13:48:05.354203 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e","Type":"ContainerStarted","Data":"597a68f65adc8aa9a9aba7ebc6aedeb793936c6b0e7dd703c6f627420e933e1d"} Dec 02 13:48:05 crc kubenswrapper[4625]: I1202 13:48:05.355996 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a853ad1c-016e-4012-a450-f79c3585216b","Type":"ContainerStarted","Data":"fc12813b05875fb9f5d9172dc36184f2ee9225ed2e187b2879529fbbbbd98e49"} Dec 02 13:48:05 
crc kubenswrapper[4625]: I1202 13:48:05.357229 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:48:05 crc kubenswrapper[4625]: I1202 13:48:05.357294 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:48:05 crc kubenswrapper[4625]: I1202 13:48:05.376186 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=35.376159238 podStartE2EDuration="35.376159238s" podCreationTimestamp="2025-12-02 13:47:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:48:05.372889751 +0000 UTC m=+241.335066826" watchObservedRunningTime="2025-12-02 13:48:05.376159238 +0000 UTC m=+241.338336333" Dec 02 13:48:05 crc kubenswrapper[4625]: I1202 13:48:05.419230 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=30.419198598 podStartE2EDuration="30.419198598s" podCreationTimestamp="2025-12-02 13:47:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:48:05.406796792 +0000 UTC m=+241.368973887" watchObservedRunningTime="2025-12-02 13:48:05.419198598 +0000 UTC m=+241.381375673" Dec 02 13:48:06 crc kubenswrapper[4625]: I1202 13:48:06.363880 4625 generic.go:334] "Generic (PLEG): container finished" podID="a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e" containerID="597a68f65adc8aa9a9aba7ebc6aedeb793936c6b0e7dd703c6f627420e933e1d" exitCode=0 Dec 02 13:48:06 crc kubenswrapper[4625]: I1202 13:48:06.364021 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e","Type":"ContainerDied","Data":"597a68f65adc8aa9a9aba7ebc6aedeb793936c6b0e7dd703c6f627420e933e1d"} Dec 02 13:48:07 crc kubenswrapper[4625]: I1202 13:48:07.522933 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:48:07 crc kubenswrapper[4625]: I1202 13:48:07.523879 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:48:07 crc kubenswrapper[4625]: I1202 13:48:07.522989 4625 patch_prober.go:28] interesting pod/downloads-7954f5f757-5sq66 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Dec 02 13:48:07 crc kubenswrapper[4625]: I1202 13:48:07.524140 4625 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-5sq66" podUID="637baf2f-239a-405a-8cde-a46bf3f7877d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Dec 02 13:48:07 crc kubenswrapper[4625]: I1202 13:48:07.615929 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:48:07 crc kubenswrapper[4625]: I1202 13:48:07.790268 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e-kube-api-access\") pod \"a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e\" (UID: \"a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e\") " Dec 02 13:48:07 crc kubenswrapper[4625]: I1202 13:48:07.790346 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e-kubelet-dir\") pod \"a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e\" (UID: \"a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e\") " Dec 02 13:48:07 crc kubenswrapper[4625]: I1202 13:48:07.790411 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e" (UID: "a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:48:07 crc kubenswrapper[4625]: I1202 13:48:07.790662 4625 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:07 crc kubenswrapper[4625]: I1202 13:48:07.798748 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e" (UID: "a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:48:07 crc kubenswrapper[4625]: I1202 13:48:07.892324 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:08 crc kubenswrapper[4625]: I1202 13:48:08.376996 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e","Type":"ContainerDied","Data":"10953ed89f71d41c6c5ed6d1684e0c23641322d49ff7cb320ff01a7b555a1d5c"} Dec 02 13:48:08 crc kubenswrapper[4625]: I1202 13:48:08.377036 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10953ed89f71d41c6c5ed6d1684e0c23641322d49ff7cb320ff01a7b555a1d5c" Dec 02 13:48:08 crc kubenswrapper[4625]: I1202 13:48:08.377088 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 02 13:48:17 crc kubenswrapper[4625]: I1202 13:48:17.527906 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-5sq66" Dec 02 13:48:21 crc kubenswrapper[4625]: I1202 13:48:21.482982 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77pll" event={"ID":"a736316a-06cf-4768-bb70-f5c9ed61de8f","Type":"ContainerStarted","Data":"4a85a3358a925efd7ff5f6090d702f77db3281fc5cb965356015d4f7bf7f1870"} Dec 02 13:48:21 crc kubenswrapper[4625]: I1202 13:48:21.484621 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r84nn" event={"ID":"039b4452-411a-43c5-9823-860c079e5de3","Type":"ContainerStarted","Data":"1243a7e1434966798e2c63369e586fa7511771c5f3880ba200e37025c2cbb770"} Dec 02 13:48:22 crc kubenswrapper[4625]: I1202 13:48:22.494697 4625 generic.go:334] "Generic (PLEG): container finished" podID="a736316a-06cf-4768-bb70-f5c9ed61de8f" containerID="4a85a3358a925efd7ff5f6090d702f77db3281fc5cb965356015d4f7bf7f1870" exitCode=0 Dec 02 13:48:22 crc kubenswrapper[4625]: I1202 13:48:22.494810 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77pll" event={"ID":"a736316a-06cf-4768-bb70-f5c9ed61de8f","Type":"ContainerDied","Data":"4a85a3358a925efd7ff5f6090d702f77db3281fc5cb965356015d4f7bf7f1870"} Dec 02 13:48:22 crc kubenswrapper[4625]: I1202 13:48:22.501932 4625 generic.go:334] "Generic (PLEG): container finished" podID="039b4452-411a-43c5-9823-860c079e5de3" containerID="1243a7e1434966798e2c63369e586fa7511771c5f3880ba200e37025c2cbb770" exitCode=0 Dec 02 13:48:22 crc kubenswrapper[4625]: I1202 13:48:22.502109 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r84nn" event={"ID":"039b4452-411a-43c5-9823-860c079e5de3","Type":"ContainerDied","Data":"1243a7e1434966798e2c63369e586fa7511771c5f3880ba200e37025c2cbb770"} Dec 02 13:48:24 crc kubenswrapper[4625]: I1202 13:48:24.526253 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ds7zw" event={"ID":"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7","Type":"ContainerStarted","Data":"353646e2114f73f4554d843ebab41ae1863363342dfba615bc2dc33143c2d2f9"} Dec 02 13:48:25 crc kubenswrapper[4625]: I1202 13:48:25.248654 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" podUID="56474967-807c-4a3e-8037-45dfb0b88fe2" containerName="oauth-openshift" containerID="cri-o://c367a9bf3f1e0e9a6d1a4f58c11726693ed797f2f4fd800b3bc4ea0dc15c5c99" gracePeriod=15 Dec 02 13:48:25 crc kubenswrapper[4625]: I1202 13:48:25.545511 4625 generic.go:334] "Generic (PLEG): container finished" podID="e24375bb-53a2-4ee7-992e-4d57c2293536" containerID="bb0b8e1beb51bb283496479f376349821cac573e387ab1261a8c6265b6a73587" exitCode=0 Dec 02 13:48:25 crc kubenswrapper[4625]: I1202 13:48:25.545719 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws49j" event={"ID":"e24375bb-53a2-4ee7-992e-4d57c2293536","Type":"ContainerDied","Data":"bb0b8e1beb51bb283496479f376349821cac573e387ab1261a8c6265b6a73587"} Dec 02 13:48:25 crc kubenswrapper[4625]: I1202 13:48:25.553538 4625 generic.go:334] "Generic (PLEG): container finished" podID="56474967-807c-4a3e-8037-45dfb0b88fe2" 
containerID="c367a9bf3f1e0e9a6d1a4f58c11726693ed797f2f4fd800b3bc4ea0dc15c5c99" exitCode=0 Dec 02 13:48:25 crc kubenswrapper[4625]: I1202 13:48:25.553663 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" event={"ID":"56474967-807c-4a3e-8037-45dfb0b88fe2","Type":"ContainerDied","Data":"c367a9bf3f1e0e9a6d1a4f58c11726693ed797f2f4fd800b3bc4ea0dc15c5c99"} Dec 02 13:48:25 crc kubenswrapper[4625]: I1202 13:48:25.558181 4625 generic.go:334] "Generic (PLEG): container finished" podID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" containerID="353646e2114f73f4554d843ebab41ae1863363342dfba615bc2dc33143c2d2f9" exitCode=0 Dec 02 13:48:25 crc kubenswrapper[4625]: I1202 13:48:25.558280 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ds7zw" event={"ID":"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7","Type":"ContainerDied","Data":"353646e2114f73f4554d843ebab41ae1863363342dfba615bc2dc33143c2d2f9"} Dec 02 13:48:25 crc kubenswrapper[4625]: I1202 13:48:25.560199 4625 generic.go:334] "Generic (PLEG): container finished" podID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" containerID="8788e5ecab56d12c80963f9493537f1efecdf0bb37b834b037aced0d9b87e844" exitCode=0 Dec 02 13:48:25 crc kubenswrapper[4625]: I1202 13:48:25.560232 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lb2rr" event={"ID":"9ad29b6a-7f18-4ed4-9f10-25f93fecb421","Type":"ContainerDied","Data":"8788e5ecab56d12c80963f9493537f1efecdf0bb37b834b037aced0d9b87e844"} Dec 02 13:48:25 crc kubenswrapper[4625]: I1202 13:48:25.882416 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:25.977119 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-64957f4549-n4mlt"] Dec 02 13:48:26 crc kubenswrapper[4625]: E1202 13:48:25.977521 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e" containerName="pruner" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:25.977537 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e" containerName="pruner" Dec 02 13:48:26 crc kubenswrapper[4625]: E1202 13:48:25.977559 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56474967-807c-4a3e-8037-45dfb0b88fe2" containerName="oauth-openshift" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:25.977566 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="56474967-807c-4a3e-8037-45dfb0b88fe2" containerName="oauth-openshift" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:25.977721 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="56474967-807c-4a3e-8037-45dfb0b88fe2" containerName="oauth-openshift" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:25.977735 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6529aa9-fb51-4779-8d1f-1d21fa8d6a5e" containerName="pruner" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:25.978340 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.096725 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-cliconfig\") pod \"56474967-807c-4a3e-8037-45dfb0b88fe2\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.096779 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-service-ca\") pod \"56474967-807c-4a3e-8037-45dfb0b88fe2\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.096810 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-audit-policies\") pod \"56474967-807c-4a3e-8037-45dfb0b88fe2\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.096866 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-router-certs\") pod \"56474967-807c-4a3e-8037-45dfb0b88fe2\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.096910 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-provider-selection\") pod \"56474967-807c-4a3e-8037-45dfb0b88fe2\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097030 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-error\") pod \"56474967-807c-4a3e-8037-45dfb0b88fe2\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097089 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmmf4\" (UniqueName: \"kubernetes.io/projected/56474967-807c-4a3e-8037-45dfb0b88fe2-kube-api-access-wmmf4\") pod \"56474967-807c-4a3e-8037-45dfb0b88fe2\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097113 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56474967-807c-4a3e-8037-45dfb0b88fe2-audit-dir\") pod \"56474967-807c-4a3e-8037-45dfb0b88fe2\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097141 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-session\") pod \"56474967-807c-4a3e-8037-45dfb0b88fe2\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097169 4625 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-idp-0-file-data\") pod \"56474967-807c-4a3e-8037-45dfb0b88fe2\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097209 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-ocp-branding-template\") pod \"56474967-807c-4a3e-8037-45dfb0b88fe2\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097229 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-serving-cert\") pod \"56474967-807c-4a3e-8037-45dfb0b88fe2\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097269 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-trusted-ca-bundle\") pod \"56474967-807c-4a3e-8037-45dfb0b88fe2\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097287 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-login\") pod \"56474967-807c-4a3e-8037-45dfb0b88fe2\" (UID: \"56474967-807c-4a3e-8037-45dfb0b88fe2\") " Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097476 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097513 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-session\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097546 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097565 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh9p4\" (UniqueName: 
\"kubernetes.io/projected/25062044-783a-4b03-befd-c9af31236749-kube-api-access-gh9p4\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097598 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097619 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25062044-783a-4b03-befd-c9af31236749-audit-dir\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097640 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-user-template-login\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097700 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-router-certs\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097724 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097759 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-service-ca\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097797 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097831 4625 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25062044-783a-4b03-befd-c9af31236749-audit-policies\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097853 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-user-template-error\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.097888 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.099263 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "56474967-807c-4a3e-8037-45dfb0b88fe2" (UID: "56474967-807c-4a3e-8037-45dfb0b88fe2"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.099542 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56474967-807c-4a3e-8037-45dfb0b88fe2-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "56474967-807c-4a3e-8037-45dfb0b88fe2" (UID: "56474967-807c-4a3e-8037-45dfb0b88fe2"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.100836 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "56474967-807c-4a3e-8037-45dfb0b88fe2" (UID: "56474967-807c-4a3e-8037-45dfb0b88fe2"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.108984 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "56474967-807c-4a3e-8037-45dfb0b88fe2" (UID: "56474967-807c-4a3e-8037-45dfb0b88fe2"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.110221 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "56474967-807c-4a3e-8037-45dfb0b88fe2" (UID: "56474967-807c-4a3e-8037-45dfb0b88fe2"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.200566 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-router-certs\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.200668 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.200712 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-service-ca\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.200760 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.200798 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25062044-783a-4b03-befd-c9af31236749-audit-policies\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.200926 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-user-template-error\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.200987 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.201060 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " 
pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.201108 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-session\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.201152 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.201198 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh9p4\" (UniqueName: \"kubernetes.io/projected/25062044-783a-4b03-befd-c9af31236749-kube-api-access-gh9p4\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.201264 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.201368 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25062044-783a-4b03-befd-c9af31236749-audit-dir\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.201435 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-user-template-login\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.201582 4625 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56474967-807c-4a3e-8037-45dfb0b88fe2-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.201619 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.201644 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-cliconfig\") on node \"crc\" 
DevicePath \"\"" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.201672 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.201697 4625 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56474967-807c-4a3e-8037-45dfb0b88fe2-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.208157 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-service-ca\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.211543 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.212339 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.212647 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-user-template-login\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.230483 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25062044-783a-4b03-befd-c9af31236749-audit-policies\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.231104 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.233554 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-router-certs\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " 
pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.251183 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-session\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.253501 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25062044-783a-4b03-befd-c9af31236749-audit-dir\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.254807 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.262598 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64957f4549-n4mlt"] Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.306179 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.307214 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-user-template-error\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.309936 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56474967-807c-4a3e-8037-45dfb0b88fe2-kube-api-access-wmmf4" (OuterVolumeSpecName: "kube-api-access-wmmf4") pod "56474967-807c-4a3e-8037-45dfb0b88fe2" (UID: "56474967-807c-4a3e-8037-45dfb0b88fe2"). InnerVolumeSpecName "kube-api-access-wmmf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.311652 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "56474967-807c-4a3e-8037-45dfb0b88fe2" (UID: "56474967-807c-4a3e-8037-45dfb0b88fe2"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.312362 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmmf4\" (UniqueName: \"kubernetes.io/projected/56474967-807c-4a3e-8037-45dfb0b88fe2-kube-api-access-wmmf4\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.312508 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.315398 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "56474967-807c-4a3e-8037-45dfb0b88fe2" (UID: "56474967-807c-4a3e-8037-45dfb0b88fe2"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.315524 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "56474967-807c-4a3e-8037-45dfb0b88fe2" (UID: "56474967-807c-4a3e-8037-45dfb0b88fe2"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.316947 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/25062044-783a-4b03-befd-c9af31236749-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.317265 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh9p4\" (UniqueName: \"kubernetes.io/projected/25062044-783a-4b03-befd-c9af31236749-kube-api-access-gh9p4\") pod \"oauth-openshift-64957f4549-n4mlt\" (UID: \"25062044-783a-4b03-befd-c9af31236749\") " pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.317489 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "56474967-807c-4a3e-8037-45dfb0b88fe2" (UID: "56474967-807c-4a3e-8037-45dfb0b88fe2"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.320816 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "56474967-807c-4a3e-8037-45dfb0b88fe2" (UID: "56474967-807c-4a3e-8037-45dfb0b88fe2"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.417108 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.417182 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.417197 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.417211 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.417435 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.517889 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "56474967-807c-4a3e-8037-45dfb0b88fe2" (UID: "56474967-807c-4a3e-8037-45dfb0b88fe2"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.519854 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "56474967-807c-4a3e-8037-45dfb0b88fe2" (UID: "56474967-807c-4a3e-8037-45dfb0b88fe2"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.520224 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "56474967-807c-4a3e-8037-45dfb0b88fe2" (UID: "56474967-807c-4a3e-8037-45dfb0b88fe2"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.520727 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.520786 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.520802 4625 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56474967-807c-4a3e-8037-45dfb0b88fe2-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.577172 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pmg6" event={"ID":"36ff365b-030a-4ee4-9819-c5c41464213d","Type":"ContainerStarted","Data":"fa564fcb2099dcb27e38c8e170b12f47d0304b1cd480eed81f44f93db65952b4"} Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.583417 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnz8w" event={"ID":"b3dd5657-6642-43f3-922f-37dea47fe07a","Type":"ContainerStarted","Data":"4fc3b4103ecec4ebbd02a7feb4fff6afd97c529a9e44346425366e698920d94a"} Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.589506 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r84nn" event={"ID":"039b4452-411a-43c5-9823-860c079e5de3","Type":"ContainerStarted","Data":"b86c9c281c3df1ce101290e3cd66601b45d7f07b5f9479f745210e3b0fa82ffb"} Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.591874 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" event={"ID":"56474967-807c-4a3e-8037-45dfb0b88fe2","Type":"ContainerDied","Data":"4b52beddf23179e47ff8dccc085c5454f85df36edb2aa7f38f09cc2527f1e19d"} Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.591939 4625 scope.go:117] "RemoveContainer" containerID="c367a9bf3f1e0e9a6d1a4f58c11726693ed797f2f4fd800b3bc4ea0dc15c5c99" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.592092 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-x8tnt" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.620232 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwb28" event={"ID":"4d302da4-c96b-4efd-be3e-104812b4adfa","Type":"ContainerStarted","Data":"697d4f9f321cc6f112101e7dd5f1e3d51e76a2b5b7a9abc91daaad718753d269"} Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.638135 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77pll" event={"ID":"a736316a-06cf-4768-bb70-f5c9ed61de8f","Type":"ContainerStarted","Data":"0388556eb45ea4134affbb114ce7b1e8d52eb08118e613079cbdcadb68cd1afc"} Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.749697 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-77pll" podStartSLOduration=6.358749802 podStartE2EDuration="1m29.749674557s" podCreationTimestamp="2025-12-02 13:46:57 +0000 UTC" firstStartedPulling="2025-12-02 13:47:01.418490325 +0000 UTC m=+177.380667400" lastFinishedPulling="2025-12-02 13:48:24.80941508 +0000 UTC m=+260.771592155" observedRunningTime="2025-12-02 13:48:26.745931838 +0000 UTC m=+262.708108913" watchObservedRunningTime="2025-12-02 13:48:26.749674557 +0000 UTC m=+262.711851632" Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.900621 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x8tnt"] Dec 02 13:48:26 crc kubenswrapper[4625]: I1202 13:48:26.900677 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-x8tnt"] Dec 02 13:48:27 crc kubenswrapper[4625]: I1202 13:48:27.709358 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:48:27 crc kubenswrapper[4625]: I1202 13:48:27.709875 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:48:27 crc kubenswrapper[4625]: I1202 13:48:27.812089 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ds7zw" event={"ID":"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7","Type":"ContainerStarted","Data":"b6ecacc018ddbe7bc6af5a2858a3b5032672da150a18d969c69cd6535c3d1c5c"} Dec 02 13:48:27 crc kubenswrapper[4625]: I1202 13:48:27.927907 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r84nn" podStartSLOduration=6.572925059 podStartE2EDuration="1m32.927879649s" podCreationTimestamp="2025-12-02 13:46:55 +0000 UTC" firstStartedPulling="2025-12-02 13:46:59.032796133 +0000 UTC m=+174.994973208" lastFinishedPulling="2025-12-02 13:48:25.387750723 +0000 UTC m=+261.349927798" observedRunningTime="2025-12-02 13:48:26.89574484 +0000 UTC m=+262.857921905" watchObservedRunningTime="2025-12-02 13:48:27.927879649 +0000 UTC m=+263.890056724" Dec 02 13:48:27 crc kubenswrapper[4625]: I1202 13:48:27.929867 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ds7zw" podStartSLOduration=6.908566911 podStartE2EDuration="1m32.929853237s" podCreationTimestamp="2025-12-02 13:46:55 +0000 UTC" firstStartedPulling="2025-12-02 13:47:00.212596795 +0000 UTC m=+176.174773870" lastFinishedPulling="2025-12-02 13:48:26.233883121 +0000 UTC m=+262.196060196" 
observedRunningTime="2025-12-02 13:48:27.924445567 +0000 UTC m=+263.886622652" watchObservedRunningTime="2025-12-02 13:48:27.929853237 +0000 UTC m=+263.892030302" Dec 02 13:48:28 crc kubenswrapper[4625]: I1202 13:48:28.979627 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56474967-807c-4a3e-8037-45dfb0b88fe2" path="/var/lib/kubelet/pods/56474967-807c-4a3e-8037-45dfb0b88fe2/volumes" Dec 02 13:48:29 crc kubenswrapper[4625]: I1202 13:48:29.123151 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws49j" event={"ID":"e24375bb-53a2-4ee7-992e-4d57c2293536","Type":"ContainerStarted","Data":"787c8bc7e50f3f0b5c35fd2ed346849f13aeec67fcb3b2bc04b6ce4c535c01e8"} Dec 02 13:48:29 crc kubenswrapper[4625]: I1202 13:48:29.132166 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lb2rr" event={"ID":"9ad29b6a-7f18-4ed4-9f10-25f93fecb421","Type":"ContainerStarted","Data":"be99692a6483633abef9cc0877946709122aff2e122b1da1169a1e40ceb2195d"} Dec 02 13:48:29 crc kubenswrapper[4625]: I1202 13:48:29.186292 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ws49j" podStartSLOduration=6.546528058 podStartE2EDuration="1m34.186262218s" podCreationTimestamp="2025-12-02 13:46:55 +0000 UTC" firstStartedPulling="2025-12-02 13:47:00.155449252 +0000 UTC m=+176.117626327" lastFinishedPulling="2025-12-02 13:48:27.795183422 +0000 UTC m=+263.757360487" observedRunningTime="2025-12-02 13:48:29.186119634 +0000 UTC m=+265.148296719" watchObservedRunningTime="2025-12-02 13:48:29.186262218 +0000 UTC m=+265.148439293" Dec 02 13:48:29 crc kubenswrapper[4625]: I1202 13:48:29.519323 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lb2rr" podStartSLOduration=7.291658455 podStartE2EDuration="1m32.519280058s" podCreationTimestamp="2025-12-02 13:46:57 +0000 UTC" firstStartedPulling="2025-12-02 13:47:01.456081637 +0000 UTC m=+177.418258712" lastFinishedPulling="2025-12-02 13:48:26.68370324 +0000 UTC m=+262.645880315" observedRunningTime="2025-12-02 13:48:29.512188179 +0000 UTC m=+265.474365274" watchObservedRunningTime="2025-12-02 13:48:29.519280058 +0000 UTC m=+265.481457133" Dec 02 13:48:30 crc kubenswrapper[4625]: I1202 13:48:30.994933 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-77pll" podUID="a736316a-06cf-4768-bb70-f5c9ed61de8f" containerName="registry-server" probeResult="failure" output=< Dec 02 13:48:30 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s Dec 02 13:48:30 crc kubenswrapper[4625]: > Dec 02 13:48:31 crc kubenswrapper[4625]: I1202 13:48:31.928952 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64957f4549-n4mlt"] Dec 02 13:48:31 crc kubenswrapper[4625]: I1202 13:48:31.929326 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" event={"ID":"25062044-783a-4b03-befd-c9af31236749","Type":"ContainerStarted","Data":"f7821b5d9338906ba326102dde31a965a4e521c82114c05df413b781b3a4e9a6"} Dec 02 13:48:33 crc kubenswrapper[4625]: I1202 13:48:33.446796 4625 generic.go:334] "Generic (PLEG): container finished" podID="36ff365b-030a-4ee4-9819-c5c41464213d" containerID="fa564fcb2099dcb27e38c8e170b12f47d0304b1cd480eed81f44f93db65952b4" exitCode=0 Dec 02 13:48:33 crc 
kubenswrapper[4625]: I1202 13:48:33.446911 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pmg6" event={"ID":"36ff365b-030a-4ee4-9819-c5c41464213d","Type":"ContainerDied","Data":"fa564fcb2099dcb27e38c8e170b12f47d0304b1cd480eed81f44f93db65952b4"} Dec 02 13:48:34 crc kubenswrapper[4625]: I1202 13:48:34.453894 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" event={"ID":"25062044-783a-4b03-befd-c9af31236749","Type":"ContainerStarted","Data":"2498ef597315f8a7c26e0edb8462acfac4e443848fc82ce3743a618acfcfa6cb"} Dec 02 13:48:34 crc kubenswrapper[4625]: I1202 13:48:34.455348 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:35 crc kubenswrapper[4625]: I1202 13:48:35.455379 4625 patch_prober.go:28] interesting pod/oauth-openshift-64957f4549-n4mlt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 13:48:35 crc kubenswrapper[4625]: I1202 13:48:35.456959 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" podUID="25062044-783a-4b03-befd-c9af31236749" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 02 13:48:35 crc kubenswrapper[4625]: I1202 13:48:35.660601 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" Dec 02 13:48:35 crc kubenswrapper[4625]: I1202 13:48:35.682927 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-64957f4549-n4mlt" podStartSLOduration=35.682903066 podStartE2EDuration="35.682903066s" podCreationTimestamp="2025-12-02 13:48:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:48:34.516807482 +0000 UTC m=+270.478984557" watchObservedRunningTime="2025-12-02 13:48:35.682903066 +0000 UTC m=+271.645080141" Dec 02 13:48:35 crc kubenswrapper[4625]: I1202 13:48:35.727708 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:48:35 crc kubenswrapper[4625]: I1202 13:48:35.727756 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:48:35 crc kubenswrapper[4625]: I1202 13:48:35.941671 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:48:35 crc kubenswrapper[4625]: I1202 13:48:35.942298 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:48:36 crc kubenswrapper[4625]: I1202 13:48:36.199618 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:48:36 crc kubenswrapper[4625]: I1202 13:48:36.199996 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:48:36 crc kubenswrapper[4625]: I1202 13:48:36.989710 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-r84nn" podUID="039b4452-411a-43c5-9823-860c079e5de3" containerName="registry-server" probeResult="failure" output=< Dec 02 13:48:36 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s Dec 02 13:48:36 crc kubenswrapper[4625]: > Dec 02 13:48:37 crc kubenswrapper[4625]: I1202 13:48:37.053934 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ds7zw" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" containerName="registry-server" probeResult="failure" output=< Dec 02 13:48:37 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s Dec 02 13:48:37 crc kubenswrapper[4625]: > Dec 02 13:48:37 crc kubenswrapper[4625]: I1202 13:48:37.507465 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ws49j" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" containerName="registry-server" probeResult="failure" output=< Dec 02 13:48:37 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s Dec 02 13:48:37 crc kubenswrapper[4625]: > Dec 02 13:48:37 crc kubenswrapper[4625]: I1202 13:48:37.564147 4625 generic.go:334] "Generic (PLEG): container finished" podID="b3dd5657-6642-43f3-922f-37dea47fe07a" containerID="4fc3b4103ecec4ebbd02a7feb4fff6afd97c529a9e44346425366e698920d94a" exitCode=0 Dec 02 13:48:37 crc kubenswrapper[4625]: I1202 13:48:37.564210 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnz8w" event={"ID":"b3dd5657-6642-43f3-922f-37dea47fe07a","Type":"ContainerDied","Data":"4fc3b4103ecec4ebbd02a7feb4fff6afd97c529a9e44346425366e698920d94a"} Dec 02 13:48:37 crc kubenswrapper[4625]: I1202 13:48:37.569002 4625 generic.go:334] "Generic (PLEG): container finished" podID="4d302da4-c96b-4efd-be3e-104812b4adfa" containerID="697d4f9f321cc6f112101e7dd5f1e3d51e76a2b5b7a9abc91daaad718753d269" exitCode=0 Dec 02 13:48:37 crc kubenswrapper[4625]: I1202 13:48:37.569909 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwb28" event={"ID":"4d302da4-c96b-4efd-be3e-104812b4adfa","Type":"ContainerDied","Data":"697d4f9f321cc6f112101e7dd5f1e3d51e76a2b5b7a9abc91daaad718753d269"} Dec 02 13:48:37 crc kubenswrapper[4625]: I1202 13:48:37.991680 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:48:37 crc kubenswrapper[4625]: I1202 13:48:37.994136 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:48:38 crc kubenswrapper[4625]: I1202 13:48:38.044325 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:48:38 crc kubenswrapper[4625]: I1202 13:48:38.061960 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:48:38 crc kubenswrapper[4625]: I1202 13:48:38.295542 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:48:38 crc kubenswrapper[4625]: I1202 13:48:38.627749 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:48:39 crc kubenswrapper[4625]: I1202 13:48:39.586553 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pmg6" event={"ID":"36ff365b-030a-4ee4-9819-c5c41464213d","Type":"ContainerStarted","Data":"bdd9ae860a41baf1c2648459ff46a769b8fd8b71490002436786bafa821c2d36"} Dec 02 13:48:39 crc kubenswrapper[4625]: I1202 13:48:39.807904 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lb2rr"] Dec 02 13:48:40 crc kubenswrapper[4625]: I1202 13:48:40.598495 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwb28" event={"ID":"4d302da4-c96b-4efd-be3e-104812b4adfa","Type":"ContainerStarted","Data":"f719411005de7a9353f80da4d4f62c2862aa343d49d832879ff14745e96d178b"} Dec 02 13:48:40 crc kubenswrapper[4625]: I1202 13:48:40.601245 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnz8w" event={"ID":"b3dd5657-6642-43f3-922f-37dea47fe07a","Type":"ContainerStarted","Data":"1f9b42c3d63b115c4aec053f1dfebbced0cf80367a130a2e10be2722b70cb2cd"} Dec 02 13:48:40 crc kubenswrapper[4625]: I1202 13:48:40.624645 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zwb28" podStartSLOduration=4.290685731 podStartE2EDuration="1m42.624621072s" podCreationTimestamp="2025-12-02 13:46:58 +0000 UTC" firstStartedPulling="2025-12-02 13:47:01.415038451 +0000 UTC m=+177.377215526" lastFinishedPulling="2025-12-02 13:48:39.748973792 +0000 UTC m=+275.711150867" observedRunningTime="2025-12-02 13:48:40.622812929 +0000 UTC m=+276.584990014" watchObservedRunningTime="2025-12-02 13:48:40.624621072 +0000 UTC m=+276.586798147" Dec 02 13:48:40 crc kubenswrapper[4625]: I1202 13:48:40.653618 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5pmg6" podStartSLOduration=7.51723474 podStartE2EDuration="1m45.653595828s" podCreationTimestamp="2025-12-02 13:46:55 +0000 UTC" firstStartedPulling="2025-12-02 13:47:00.255615973 +0000 UTC m=+176.217793048" lastFinishedPulling="2025-12-02 13:48:38.391977061 +0000 UTC m=+274.354154136" observedRunningTime="2025-12-02 13:48:40.650693472 +0000 UTC m=+276.612870557" watchObservedRunningTime="2025-12-02 13:48:40.653595828 +0000 UTC m=+276.615772903" Dec 02 13:48:40 crc kubenswrapper[4625]: I1202 13:48:40.676800 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cnz8w" podStartSLOduration=4.325245111 podStartE2EDuration="1m42.676769671s" podCreationTimestamp="2025-12-02 13:46:58 +0000 UTC" firstStartedPulling="2025-12-02 13:47:01.4410966 +0000 UTC m=+177.403273675" lastFinishedPulling="2025-12-02 13:48:39.79262116 +0000 UTC m=+275.754798235" observedRunningTime="2025-12-02 13:48:40.676146813 +0000 UTC m=+276.638323888" watchObservedRunningTime="2025-12-02 13:48:40.676769671 +0000 UTC m=+276.638946746" Dec 02 13:48:41 crc kubenswrapper[4625]: I1202 13:48:41.606206 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lb2rr" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" containerName="registry-server" containerID="cri-o://be99692a6483633abef9cc0877946709122aff2e122b1da1169a1e40ceb2195d" gracePeriod=2 Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.402062 4625 kubelet.go:2421] "SyncLoop ADD" 
source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.403059 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.434625 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.468255 4625 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.469002 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989" gracePeriod=15 Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.469036 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c" gracePeriod=15 Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.469036 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971" gracePeriod=15 Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.469148 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714" gracePeriod=15 Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.469061 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7" gracePeriod=15 Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.470099 4625 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 13:48:42 crc kubenswrapper[4625]: E1202 13:48:42.470545 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.470578 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 02 13:48:42 crc kubenswrapper[4625]: E1202 13:48:42.470602 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.470621 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 02 13:48:42 crc 
kubenswrapper[4625]: E1202 13:48:42.470645 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.470654 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 02 13:48:42 crc kubenswrapper[4625]: E1202 13:48:42.470681 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.470690 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 13:48:42 crc kubenswrapper[4625]: E1202 13:48:42.470716 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.470739 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 13:48:42 crc kubenswrapper[4625]: E1202 13:48:42.470766 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.470778 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 02 13:48:42 crc kubenswrapper[4625]: E1202 13:48:42.470806 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.470823 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.471070 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.471090 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.471104 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.471123 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.471136 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.471155 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.571808 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.571863 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.571946 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.573673 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.573737 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.622630 4625 generic.go:334] "Generic (PLEG): container finished" podID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" containerID="be99692a6483633abef9cc0877946709122aff2e122b1da1169a1e40ceb2195d" exitCode=0 Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.622691 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lb2rr" event={"ID":"9ad29b6a-7f18-4ed4-9f10-25f93fecb421","Type":"ContainerDied","Data":"be99692a6483633abef9cc0877946709122aff2e122b1da1169a1e40ceb2195d"} Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.675503 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.675560 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.675591 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.675614 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.675635 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.675653 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.675661 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.675667 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.675702 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.675722 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.675754 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.675744 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.675779 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.731429 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.761471 4625 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.761504 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 02 13:48:42 crc kubenswrapper[4625]: W1202 13:48:42.766528 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-32737b6a98dc148e4f27c26783ccfb53a6e4abbb4878ffc8be73793cedd7a2b1 WatchSource:0}: Error finding container 32737b6a98dc148e4f27c26783ccfb53a6e4abbb4878ffc8be73793cedd7a2b1: Status 404 returned error can't find the container with id 32737b6a98dc148e4f27c26783ccfb53a6e4abbb4878ffc8be73793cedd7a2b1 Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.776596 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.777450 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.777616 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.777580 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.776960 4625 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.777764 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.814544 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.857937 4625 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.982651 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-utilities\") pod \"9ad29b6a-7f18-4ed4-9f10-25f93fecb421\" (UID: \"9ad29b6a-7f18-4ed4-9f10-25f93fecb421\") " Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.982723 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2dgk\" (UniqueName: \"kubernetes.io/projected/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-kube-api-access-d2dgk\") pod \"9ad29b6a-7f18-4ed4-9f10-25f93fecb421\" (UID: \"9ad29b6a-7f18-4ed4-9f10-25f93fecb421\") " Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.982759 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-catalog-content\") pod \"9ad29b6a-7f18-4ed4-9f10-25f93fecb421\" (UID: \"9ad29b6a-7f18-4ed4-9f10-25f93fecb421\") " Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.983802 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-utilities" (OuterVolumeSpecName: "utilities") pod "9ad29b6a-7f18-4ed4-9f10-25f93fecb421" (UID: "9ad29b6a-7f18-4ed4-9f10-25f93fecb421"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:48:42 crc kubenswrapper[4625]: I1202 13:48:42.998998 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-kube-api-access-d2dgk" (OuterVolumeSpecName: "kube-api-access-d2dgk") pod "9ad29b6a-7f18-4ed4-9f10-25f93fecb421" (UID: "9ad29b6a-7f18-4ed4-9f10-25f93fecb421"). InnerVolumeSpecName "kube-api-access-d2dgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.004820 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ad29b6a-7f18-4ed4-9f10-25f93fecb421" (UID: "9ad29b6a-7f18-4ed4-9f10-25f93fecb421"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.084201 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.084238 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2dgk\" (UniqueName: \"kubernetes.io/projected/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-kube-api-access-d2dgk\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.084248 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad29b6a-7f18-4ed4-9f10-25f93fecb421-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:43 crc kubenswrapper[4625]: E1202 13:48:43.120357 4625 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.153:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d6a21f7998259 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 13:48:43.119600217 +0000 UTC m=+279.081777292,LastTimestamp:2025-12-02 13:48:43.119600217 +0000 UTC m=+279.081777292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.629784 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9abb7cc42b6f95762c9ecdc2759e6656b59d321b5e52c4295a67110ee0a257a9"} Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.630098 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"32737b6a98dc148e4f27c26783ccfb53a6e4abbb4878ffc8be73793cedd7a2b1"} Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.630692 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.632568 4625 generic.go:334] "Generic (PLEG): container finished" podID="a853ad1c-016e-4012-a450-f79c3585216b" containerID="fc12813b05875fb9f5d9172dc36184f2ee9225ed2e187b2879529fbbbbd98e49" exitCode=0 Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.632921 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"a853ad1c-016e-4012-a450-f79c3585216b","Type":"ContainerDied","Data":"fc12813b05875fb9f5d9172dc36184f2ee9225ed2e187b2879529fbbbbd98e49"} Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.634597 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.635128 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.635478 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lb2rr" event={"ID":"9ad29b6a-7f18-4ed4-9f10-25f93fecb421","Type":"ContainerDied","Data":"b0430d5eaad15a860a9c6ba991dacd754c9813df4e269154996f4587eea12aae"} Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.635522 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lb2rr" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.635538 4625 scope.go:117] "RemoveContainer" containerID="be99692a6483633abef9cc0877946709122aff2e122b1da1169a1e40ceb2195d" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.636604 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.637596 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.637859 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.639709 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.644801 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.646033 4625 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7" exitCode=0 Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.646086 4625 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971" exitCode=0 Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.646094 4625 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c" exitCode=0 Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.646101 4625 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714" exitCode=2 Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.656464 4625 scope.go:117] "RemoveContainer" containerID="8788e5ecab56d12c80963f9493537f1efecdf0bb37b834b037aced0d9b87e844" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.661650 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.661868 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.662041 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.674677 4625 scope.go:117] "RemoveContainer" containerID="b5578198fc6bc265ee6b7509bf70811817e75fee06dc988842362c1a9432e5f9" Dec 02 13:48:43 crc kubenswrapper[4625]: I1202 13:48:43.697184 4625 scope.go:117] "RemoveContainer" containerID="f5d8256b5e778dd2a71619eab09fa0a2765ff9d6c8f085664abcc2a2a4c1d800" Dec 02 13:48:44 crc kubenswrapper[4625]: I1202 13:48:44.658887 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 13:48:44 crc kubenswrapper[4625]: I1202 13:48:44.883411 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:44 crc kubenswrapper[4625]: I1202 13:48:44.883858 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.153:6443: connect: connection refused" Dec 02 13:48:44 crc kubenswrapper[4625]: I1202 13:48:44.884448 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.270362 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.271488 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.271771 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.272046 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.388207 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a853ad1c-016e-4012-a450-f79c3585216b-kube-api-access\") pod \"a853ad1c-016e-4012-a450-f79c3585216b\" (UID: \"a853ad1c-016e-4012-a450-f79c3585216b\") " Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.388289 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a853ad1c-016e-4012-a450-f79c3585216b-kubelet-dir\") pod \"a853ad1c-016e-4012-a450-f79c3585216b\" (UID: \"a853ad1c-016e-4012-a450-f79c3585216b\") " Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.388373 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a853ad1c-016e-4012-a450-f79c3585216b-var-lock\") pod \"a853ad1c-016e-4012-a450-f79c3585216b\" (UID: \"a853ad1c-016e-4012-a450-f79c3585216b\") " Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.388485 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a853ad1c-016e-4012-a450-f79c3585216b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a853ad1c-016e-4012-a450-f79c3585216b" (UID: "a853ad1c-016e-4012-a450-f79c3585216b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.388578 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a853ad1c-016e-4012-a450-f79c3585216b-var-lock" (OuterVolumeSpecName: "var-lock") pod "a853ad1c-016e-4012-a450-f79c3585216b" (UID: "a853ad1c-016e-4012-a450-f79c3585216b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.389548 4625 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a853ad1c-016e-4012-a450-f79c3585216b-var-lock\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.389651 4625 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a853ad1c-016e-4012-a450-f79c3585216b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.395465 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a853ad1c-016e-4012-a450-f79c3585216b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a853ad1c-016e-4012-a450-f79c3585216b" (UID: "a853ad1c-016e-4012-a450-f79c3585216b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.490516 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a853ad1c-016e-4012-a450-f79c3585216b-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.668833 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a853ad1c-016e-4012-a450-f79c3585216b","Type":"ContainerDied","Data":"2dec17899b5191cd5d9ae71f9f18b70ddd211971844d69176a7eb54b59622a0c"} Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.670643 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dec17899b5191cd5d9ae71f9f18b70ddd211971844d69176a7eb54b59622a0c" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.668885 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.689131 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.689763 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.690401 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.780617 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.781550 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.781967 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.783038 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.783302 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.830303 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.830858 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.831086 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.831251 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.831444 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:45 crc kubenswrapper[4625]: I1202 13:48:45.998238 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.000062 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.000371 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.000604 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.000847 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.001084 4625 status_manager.go:851] "Failed to get status for pod" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" pod="openshift-marketplace/certified-operators-ds7zw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ds7zw\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 
13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.048784 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.049212 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.049462 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.049658 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.049952 4625 status_manager.go:851] "Failed to get status for pod" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" pod="openshift-marketplace/certified-operators-ds7zw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ds7zw\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.050245 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.185107 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.185156 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.239790 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.240776 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.240964 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc 
kubenswrapper[4625]: I1202 13:48:46.241114 4625 status_manager.go:851] "Failed to get status for pod" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" pod="openshift-marketplace/certified-operators-ds7zw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ds7zw\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.241259 4625 status_manager.go:851] "Failed to get status for pod" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" pod="openshift-marketplace/certified-operators-ws49j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ws49j\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.241458 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.242099 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.245514 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.246522 4625 status_manager.go:851] "Failed to get status for pod" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" pod="openshift-marketplace/community-operators-5pmg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5pmg6\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.246760 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.247217 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.247712 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.247895 4625 status_manager.go:851] "Failed to get status for pod" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" 
pod="openshift-marketplace/certified-operators-ds7zw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ds7zw\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.248539 4625 status_manager.go:851] "Failed to get status for pod" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" pod="openshift-marketplace/certified-operators-ws49j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ws49j\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.248814 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.300713 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.301611 4625 status_manager.go:851] "Failed to get status for pod" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" pod="openshift-marketplace/community-operators-5pmg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5pmg6\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.301835 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.302541 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.302975 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.303455 4625 status_manager.go:851] "Failed to get status for pod" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" pod="openshift-marketplace/certified-operators-ws49j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ws49j\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.303649 4625 status_manager.go:851] "Failed to get status for pod" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" pod="openshift-marketplace/certified-operators-ds7zw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ds7zw\": 
dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.306914 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: E1202 13:48:46.570587 4625 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: E1202 13:48:46.571660 4625 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: E1202 13:48:46.572062 4625 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: E1202 13:48:46.572597 4625 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: E1202 13:48:46.573022 4625 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.573071 4625 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 02 13:48:46 crc kubenswrapper[4625]: E1202 13:48:46.573386 4625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="200ms" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.730094 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.730626 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.730855 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.731047 4625 status_manager.go:851] "Failed to get status for pod" 
podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" pod="openshift-marketplace/certified-operators-ds7zw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ds7zw\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.731251 4625 status_manager.go:851] "Failed to get status for pod" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" pod="openshift-marketplace/certified-operators-ws49j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ws49j\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.731482 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.731671 4625 status_manager.go:851] "Failed to get status for pod" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" pod="openshift-marketplace/community-operators-5pmg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5pmg6\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: I1202 13:48:46.731867 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:46 crc kubenswrapper[4625]: E1202 13:48:46.775121 4625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="400ms" Dec 02 13:48:47 crc kubenswrapper[4625]: E1202 13:48:47.176025 4625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="800ms" Dec 02 13:48:47 crc kubenswrapper[4625]: E1202 13:48:47.189427 4625 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.153:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d6a21f7998259 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 13:48:43.119600217 +0000 UTC m=+279.081777292,LastTimestamp:2025-12-02 13:48:43.119600217 +0000 UTC m=+279.081777292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 13:48:49 crc kubenswrapper[4625]: E1202 13:48:47.977450 4625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="1.6s" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:48.682997 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:48.683967 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:48.727069 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:48.727710 4625 status_manager.go:851] "Failed to get status for pod" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" pod="openshift-marketplace/redhat-operators-zwb28" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zwb28\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:48.727970 4625 status_manager.go:851] "Failed to get status for pod" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" pod="openshift-marketplace/community-operators-5pmg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5pmg6\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:48.728211 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:48.728510 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:48.728762 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:48.728956 4625 status_manager.go:851] "Failed to get status for pod" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" pod="openshift-marketplace/certified-operators-ds7zw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ds7zw\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:48.729156 4625 status_manager.go:851] "Failed to get status for pod" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" 
pod="openshift-marketplace/certified-operators-ws49j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ws49j\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:48.729356 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.061972 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.062020 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.107851 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.108557 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.109045 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.109848 4625 status_manager.go:851] "Failed to get status for pod" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" pod="openshift-marketplace/certified-operators-ds7zw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ds7zw\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.110235 4625 status_manager.go:851] "Failed to get status for pod" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" pod="openshift-marketplace/certified-operators-ws49j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ws49j\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.110656 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.110873 4625 status_manager.go:851] "Failed to get status for pod" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" pod="openshift-marketplace/redhat-operators-zwb28" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zwb28\": dial tcp 38.102.83.153:6443: connect: connection refused" 
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.111106 4625 status_manager.go:851] "Failed to get status for pod" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" pod="openshift-marketplace/community-operators-5pmg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5pmg6\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.111280 4625 status_manager.go:851] "Failed to get status for pod" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" pod="openshift-marketplace/redhat-operators-cnz8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-cnz8w\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.111493 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: E1202 13:48:49.579009 4625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="3.2s" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.696946 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.698582 4625 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989" exitCode=0 Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.743544 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.744402 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.744615 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.744667 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.744823 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.745361 4625 status_manager.go:851] "Failed to get status for pod" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" pod="openshift-marketplace/certified-operators-ds7zw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ds7zw\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.745733 4625 status_manager.go:851] "Failed to get status for pod" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" pod="openshift-marketplace/certified-operators-ws49j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ws49j\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.745938 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.746116 4625 status_manager.go:851] "Failed to get status for pod" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" pod="openshift-marketplace/redhat-operators-zwb28" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zwb28\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.746281 4625 status_manager.go:851] "Failed to get status for pod" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" pod="openshift-marketplace/community-operators-5pmg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5pmg6\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.746467 4625 status_manager.go:851] "Failed to get status for pod" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" pod="openshift-marketplace/redhat-operators-cnz8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-cnz8w\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.746669 4625 status_manager.go:851] "Failed to get status for pod" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" pod="openshift-marketplace/redhat-operators-zwb28" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zwb28\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.746829 4625 status_manager.go:851] "Failed to get status for pod" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" pod="openshift-marketplace/community-operators-5pmg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5pmg6\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.746983 4625 status_manager.go:851] "Failed to get status for pod" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" pod="openshift-marketplace/redhat-operators-cnz8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-cnz8w\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.747128 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.747296 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.747661 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.747999 4625 status_manager.go:851] "Failed to get status for pod" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" pod="openshift-marketplace/certified-operators-ws49j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ws49j\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.748346 4625 status_manager.go:851] "Failed to get status for pod" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" pod="openshift-marketplace/certified-operators-ds7zw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ds7zw\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.748556 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.812696 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.813655 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.815528 4625 status_manager.go:851] "Failed to get status for pod" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" pod="openshift-marketplace/redhat-operators-zwb28" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zwb28\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.815907 4625 status_manager.go:851] "Failed to get status for pod" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" pod="openshift-marketplace/community-operators-5pmg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5pmg6\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.816206 4625 status_manager.go:851] "Failed to get status for pod" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" pod="openshift-marketplace/redhat-operators-cnz8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-cnz8w\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.816765 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.817078 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.817366 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.817692 4625 status_manager.go:851] "Failed to get status for pod" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" pod="openshift-marketplace/certified-operators-ds7zw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ds7zw\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.818020 4625 status_manager.go:851] "Failed to get status for pod" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" pod="openshift-marketplace/certified-operators-ws49j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ws49j\": dial tcp 38.102.83.153:6443: connect: connection refused"
Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.818300 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused"
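[Note] The repeating status_manager.go:851 entries above are the kubelet's status manager retrying pod-status GETs against the internal apiserver endpoint; the dial error means nothing is listening on 38.102.83.153:6443 while kube-apiserver restarts. The refusal can be confirmed independently of the kubelet with a plain TCP dial. A minimal sketch in Go, assuming only the host:port taken verbatim from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The endpoint every "Failed to get status for pod" entry above is dialing.
	addr := "api-int.crc.testing:6443"
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		// While the apiserver is down this prints:
		// dial tcp 38.102.83.153:6443: connect: connection refused
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections again")
}

Run from the node, this reproduces the same connect: connection refused until the apiserver container is back up.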
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.818654 4625 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.883623 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.883691 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.883748 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.883795 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.883809 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.883939 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.884084 4625 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.884106 4625 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:49 crc kubenswrapper[4625]: I1202 13:48:49.884123 4625 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.710280 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.711336 4625 scope.go:117] "RemoveContainer" containerID="6b5ba21fdaf54eabc73f1061187fd6cfc762ddd38cd8375a184d9351cd0ea2e7" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.711393 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.732999 4625 status_manager.go:851] "Failed to get status for pod" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" pod="openshift-marketplace/redhat-operators-zwb28" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zwb28\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.733389 4625 status_manager.go:851] "Failed to get status for pod" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" pod="openshift-marketplace/community-operators-5pmg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5pmg6\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.733428 4625 scope.go:117] "RemoveContainer" containerID="fea43e161eb52629152f3226d8199edb9e368ed0b658668ccf5fe80e055a4971" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.733563 4625 status_manager.go:851] "Failed to get status for pod" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" pod="openshift-marketplace/redhat-operators-cnz8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-cnz8w\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.733764 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.733959 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": 
dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.734112 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.734284 4625 status_manager.go:851] "Failed to get status for pod" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" pod="openshift-marketplace/certified-operators-ds7zw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ds7zw\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.734506 4625 status_manager.go:851] "Failed to get status for pod" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" pod="openshift-marketplace/certified-operators-ws49j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ws49j\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.734817 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.735107 4625 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.747576 4625 scope.go:117] "RemoveContainer" containerID="92a9e53c07907b5d6cd191754b62a28289ca93ecede05d21c1c8cee9aa722e9c" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.763233 4625 scope.go:117] "RemoveContainer" containerID="d560c3c2cafe1ccb805b2bf262da7191fae36bc27a2488e91ae6765059aef714" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.778610 4625 scope.go:117] "RemoveContainer" containerID="c9e5526cf8f3a468247ecd952ca00466ccab06fb8e41a36ebfb2e2d0f98a9989" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.797462 4625 scope.go:117] "RemoveContainer" containerID="4d5d344642362bb84cc89e2def227e2e673a3cbfb24685653adeb7dbf4e9f4c6" Dec 02 13:48:50 crc kubenswrapper[4625]: I1202 13:48:50.862875 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 02 13:48:52 crc kubenswrapper[4625]: E1202 13:48:52.780598 4625 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" interval="6.4s" Dec 02 13:48:54 crc kubenswrapper[4625]: I1202 13:48:54.860677 4625 status_manager.go:851] "Failed to get status for pod" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" pod="openshift-marketplace/community-operators-5pmg6" err="Get 
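[Note] The controller.go:145 entry above is the node-lease controller failing to renew the Lease object kube-node-lease/crc (the node heartbeat), with the next retry backed off to 6.4s. Once the apiserver is reachable again, the same object can be inspected with client-go; a minimal sketch, assuming KUBECONFIG points at an admin kubeconfig for this cluster (that path handling is an assumption, not from the log):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG is an assumed entry point; any admin kubeconfig works.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The exact object the kubelet failed to renew: leases/crc in kube-node-lease.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		fmt.Println("lease fetch failed (apiserver still down?):", err)
		return
	}
	fmt.Println("node lease last renewed:", lease.Spec.RenewTime)
}

A stale RenewTime here is what eventually drives the node's Ready condition to Unknown if the outage outlasts the lease duration.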
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5pmg6\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:54 crc kubenswrapper[4625]: I1202 13:48:54.862413 4625 status_manager.go:851] "Failed to get status for pod" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" pod="openshift-marketplace/redhat-operators-cnz8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-cnz8w\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:54 crc kubenswrapper[4625]: I1202 13:48:54.862817 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:54 crc kubenswrapper[4625]: I1202 13:48:54.863239 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:54 crc kubenswrapper[4625]: I1202 13:48:54.863541 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:54 crc kubenswrapper[4625]: I1202 13:48:54.863764 4625 status_manager.go:851] "Failed to get status for pod" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" pod="openshift-marketplace/certified-operators-ds7zw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ds7zw\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:54 crc kubenswrapper[4625]: I1202 13:48:54.863969 4625 status_manager.go:851] "Failed to get status for pod" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" pod="openshift-marketplace/certified-operators-ws49j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ws49j\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:54 crc kubenswrapper[4625]: I1202 13:48:54.864182 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:54 crc kubenswrapper[4625]: I1202 13:48:54.864622 4625 status_manager.go:851] "Failed to get status for pod" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" pod="openshift-marketplace/redhat-operators-zwb28" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zwb28\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:56 crc kubenswrapper[4625]: I1202 13:48:56.855593 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:48:56 crc kubenswrapper[4625]: I1202 13:48:56.857858 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:56 crc kubenswrapper[4625]: I1202 13:48:56.858302 4625 status_manager.go:851] "Failed to get status for pod" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" pod="openshift-marketplace/redhat-operators-zwb28" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zwb28\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:56 crc kubenswrapper[4625]: I1202 13:48:56.858550 4625 status_manager.go:851] "Failed to get status for pod" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" pod="openshift-marketplace/community-operators-5pmg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5pmg6\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:56 crc kubenswrapper[4625]: I1202 13:48:56.858773 4625 status_manager.go:851] "Failed to get status for pod" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" pod="openshift-marketplace/redhat-operators-cnz8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-cnz8w\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:56 crc kubenswrapper[4625]: I1202 13:48:56.859139 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:56 crc kubenswrapper[4625]: I1202 13:48:56.859842 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:56 crc kubenswrapper[4625]: I1202 13:48:56.860239 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:56 crc kubenswrapper[4625]: I1202 13:48:56.860715 4625 status_manager.go:851] "Failed to get status for pod" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" pod="openshift-marketplace/certified-operators-ds7zw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ds7zw\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:56 crc kubenswrapper[4625]: I1202 13:48:56.861470 4625 status_manager.go:851] "Failed to get status for pod" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" pod="openshift-marketplace/certified-operators-ws49j" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ws49j\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:56 crc kubenswrapper[4625]: I1202 13:48:56.878603 4625 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce86a1bb-e2cd-4867-bf4e-297c2ff9f307" Dec 02 13:48:56 crc kubenswrapper[4625]: I1202 13:48:56.878666 4625 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce86a1bb-e2cd-4867-bf4e-297c2ff9f307" Dec 02 13:48:56 crc kubenswrapper[4625]: E1202 13:48:56.879582 4625 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:48:56 crc kubenswrapper[4625]: I1202 13:48:56.880547 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:48:57 crc kubenswrapper[4625]: E1202 13:48:57.191091 4625 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.153:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d6a21f7998259 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-02 13:48:43.119600217 +0000 UTC m=+279.081777292,LastTimestamp:2025-12-02 13:48:43.119600217 +0000 UTC m=+279.081777292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 02 13:48:57 crc kubenswrapper[4625]: E1202 13:48:57.771873 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:48:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:48:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:48:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-02T13:48:57Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: E1202 13:48:57.772747 4625 kubelet_node_status.go:585] "Error updating 
node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: E1202 13:48:57.773250 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: E1202 13:48:57.774054 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: E1202 13:48:57.774440 4625 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: E1202 13:48:57.774487 4625 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.776410 4625 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="29452ec8463a202a8cee2904472a9dc8c91cba0a6a1fc517c8bf14f61fcb337a" exitCode=0 Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.776512 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"29452ec8463a202a8cee2904472a9dc8c91cba0a6a1fc517c8bf14f61fcb337a"} Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.776553 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"21ea64df6989a8e98c3f3ca7afae49beef7ebbb6d872625a313dd107e841bf16"} Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.777066 4625 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce86a1bb-e2cd-4867-bf4e-297c2ff9f307" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.777092 4625 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce86a1bb-e2cd-4867-bf4e-297c2ff9f307" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.777357 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.777694 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: E1202 13:48:57.777713 4625 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
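[Note] The run above (one failed PATCH of nodes/crc/status, four failed GETs of the node, then kubelet_node_status.go:572 giving up) is the kubelet's bounded node-status sync: a fixed number of attempts per sync pass, then defer to the next interval rather than block. A sketch of that pattern, with the attempt count of five matching the errors logged above (the update function is a stand-in, not kubelet code):

package main

import (
	"errors"
	"fmt"
)

// tryUpdateNodeStatus stands in for one PATCH/GET attempt against the apiserver.
func tryUpdateNodeStatus() error {
	return errors.New("dial tcp 38.102.83.153:6443: connect: connection refused")
}

func main() {
	// Five attempts per pass, matching the five errors logged before giving up.
	const nodeStatusUpdateRetry = 5
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return // success: nothing more to do this pass
	}
	fmt.Println("Unable to update node status: update node status exceeds retry count")
}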
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.777965 4625 status_manager.go:851] "Failed to get status for pod" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" pod="openshift-marketplace/certified-operators-ds7zw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ds7zw\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.778243 4625 status_manager.go:851] "Failed to get status for pod" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" pod="openshift-marketplace/certified-operators-ws49j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ws49j\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.778496 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.778653 4625 status_manager.go:851] "Failed to get status for pod" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" pod="openshift-marketplace/redhat-operators-zwb28" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zwb28\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.778825 4625 status_manager.go:851] "Failed to get status for pod" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" pod="openshift-marketplace/community-operators-5pmg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5pmg6\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.779236 4625 status_manager.go:851] "Failed to get status for pod" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" pod="openshift-marketplace/redhat-operators-cnz8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-cnz8w\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.779856 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.782165 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.782221 4625 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d" exitCode=1 Dec 02 13:48:57 crc kubenswrapper[4625]: 
I1202 13:48:57.782272 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d"} Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.782723 4625 scope.go:117] "RemoveContainer" containerID="36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.783221 4625 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.783758 4625 status_manager.go:851] "Failed to get status for pod" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" pod="openshift-marketplace/redhat-operators-zwb28" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zwb28\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.784114 4625 status_manager.go:851] "Failed to get status for pod" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" pod="openshift-marketplace/community-operators-5pmg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-5pmg6\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.784505 4625 status_manager.go:851] "Failed to get status for pod" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" pod="openshift-marketplace/redhat-operators-cnz8w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-cnz8w\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.784809 4625 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.785130 4625 status_manager.go:851] "Failed to get status for pod" podUID="a853ad1c-016e-4012-a450-f79c3585216b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.785471 4625 status_manager.go:851] "Failed to get status for pod" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" pod="openshift-marketplace/redhat-marketplace-lb2rr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lb2rr\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.786017 4625 status_manager.go:851] "Failed to get status for pod" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" pod="openshift-marketplace/certified-operators-ds7zw" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ds7zw\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.786918 4625 status_manager.go:851] "Failed to get status for pod" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" pod="openshift-marketplace/certified-operators-ws49j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ws49j\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:57 crc kubenswrapper[4625]: I1202 13:48:57.787267 4625 status_manager.go:851] "Failed to get status for pod" podUID="039b4452-411a-43c5-9823-860c079e5de3" pod="openshift-marketplace/community-operators-r84nn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-r84nn\": dial tcp 38.102.83.153:6443: connect: connection refused" Dec 02 13:48:58 crc kubenswrapper[4625]: I1202 13:48:58.796675 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 02 13:48:58 crc kubenswrapper[4625]: I1202 13:48:58.797331 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"babd7b5ba60876ad74f6078b547a1c7fc0efd924dd1af6a614398c80b5d9a829"} Dec 02 13:48:58 crc kubenswrapper[4625]: I1202 13:48:58.804041 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5239e65189eb8de982a756a93af9353511256f20db2e7e6191441fe7b7c20b8c"} Dec 02 13:48:58 crc kubenswrapper[4625]: I1202 13:48:58.804125 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9964fdd9c6285d8ce29a75c563678623674ea29bac8d3aff05e6b1ddd13ff9f4"} Dec 02 13:48:59 crc kubenswrapper[4625]: I1202 13:48:59.818091 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d8eeb77407c96379c9a400bc77d0acaddddde11f5fbaa056d770690ede09c787"} Dec 02 13:48:59 crc kubenswrapper[4625]: I1202 13:48:59.818778 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"218b04fdf831275fc75a329aef99816c009ac39ea89da8cb1984cd01a0d0a5b2"} Dec 02 13:48:59 crc kubenswrapper[4625]: I1202 13:48:59.818505 4625 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce86a1bb-e2cd-4867-bf4e-297c2ff9f307" Dec 02 13:48:59 crc kubenswrapper[4625]: I1202 13:48:59.818834 4625 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce86a1bb-e2cd-4867-bf4e-297c2ff9f307" Dec 02 13:48:59 crc kubenswrapper[4625]: I1202 13:48:59.819429 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:48:59 crc kubenswrapper[4625]: I1202 13:48:59.819444 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1e38f88bd3c9329a7fd65fe20e192bfb4370597d7972f5f3f590ea3a325be05c"} Dec 02 13:49:01 crc kubenswrapper[4625]: I1202 13:49:01.881289 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:49:01 crc kubenswrapper[4625]: I1202 13:49:01.881757 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:49:01 crc kubenswrapper[4625]: I1202 13:49:01.887165 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:49:03 crc kubenswrapper[4625]: I1202 13:49:03.790106 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:49:05 crc kubenswrapper[4625]: I1202 13:49:05.210827 4625 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:49:05 crc kubenswrapper[4625]: I1202 13:49:05.461329 4625 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c4f0d204-7343-4ec9-b311-84ded2d5cbff" Dec 02 13:49:05 crc kubenswrapper[4625]: I1202 13:49:05.855379 4625 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce86a1bb-e2cd-4867-bf4e-297c2ff9f307" Dec 02 13:49:05 crc kubenswrapper[4625]: I1202 13:49:05.855674 4625 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce86a1bb-e2cd-4867-bf4e-297c2ff9f307" Dec 02 13:49:05 crc kubenswrapper[4625]: I1202 13:49:05.858971 4625 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c4f0d204-7343-4ec9-b311-84ded2d5cbff" Dec 02 13:49:07 crc kubenswrapper[4625]: I1202 13:49:07.775041 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:49:07 crc kubenswrapper[4625]: I1202 13:49:07.775708 4625 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 02 13:49:07 crc kubenswrapper[4625]: I1202 13:49:07.775793 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 02 13:49:14 crc kubenswrapper[4625]: I1202 13:49:14.843953 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 02 13:49:15 crc kubenswrapper[4625]: I1202 13:49:15.231582 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 02 13:49:16 crc kubenswrapper[4625]: I1202 
13:49:16.169866 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 02 13:49:16 crc kubenswrapper[4625]: I1202 13:49:16.235719 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 02 13:49:16 crc kubenswrapper[4625]: I1202 13:49:16.306119 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 02 13:49:16 crc kubenswrapper[4625]: I1202 13:49:16.472924 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 02 13:49:16 crc kubenswrapper[4625]: I1202 13:49:16.548433 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 02 13:49:17 crc kubenswrapper[4625]: I1202 13:49:17.192371 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 02 13:49:17 crc kubenswrapper[4625]: I1202 13:49:17.360665 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 02 13:49:17 crc kubenswrapper[4625]: I1202 13:49:17.445673 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 02 13:49:17 crc kubenswrapper[4625]: I1202 13:49:17.739700 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 02 13:49:17 crc kubenswrapper[4625]: I1202 13:49:17.777396 4625 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 02 13:49:17 crc kubenswrapper[4625]: I1202 13:49:17.777485 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 02 13:49:17 crc kubenswrapper[4625]: I1202 13:49:17.835982 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 02 13:49:17 crc kubenswrapper[4625]: I1202 13:49:17.867208 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 02 13:49:17 crc kubenswrapper[4625]: I1202 13:49:17.877776 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 02 13:49:17 crc kubenswrapper[4625]: I1202 13:49:17.913219 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 02 13:49:17 crc kubenswrapper[4625]: I1202 13:49:17.947056 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 02 13:49:18 crc kubenswrapper[4625]: I1202 13:49:18.074555 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" 
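[Note] The patch_prober/prober pairs at 13:49:07 and 13:49:17 above are the kube-controller-manager startup probe: an HTTPS GET from the kubelet to /healthz on 192.168.126.11:10257, refused until the restarted container binds its port. The check can be replayed by hand; a minimal sketch (the endpoint is verbatim from the log; skipping certificate verification mirrors how kubelet HTTPS probes behave, and the timeout value is an assumption):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 3 * time.Second,
		// The probe endpoint serves a self-signed cert; skip verification like the kubelet does.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.126.11:10257/healthz")
	if err != nil {
		// Prints the same "connect: connection refused" as the probe failures above.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status) // 200 OK once kube-controller-manager is up
}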
Dec 02 13:49:18 crc kubenswrapper[4625]: I1202 13:49:18.293841 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 02 13:49:18 crc kubenswrapper[4625]: I1202 13:49:18.486382 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 02 13:49:18 crc kubenswrapper[4625]: I1202 13:49:18.547198 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 02 13:49:18 crc kubenswrapper[4625]: I1202 13:49:18.599462 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 02 13:49:18 crc kubenswrapper[4625]: I1202 13:49:18.607593 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 02 13:49:18 crc kubenswrapper[4625]: I1202 13:49:18.645565 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 02 13:49:18 crc kubenswrapper[4625]: I1202 13:49:18.748010 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 02 13:49:18 crc kubenswrapper[4625]: I1202 13:49:18.776868 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 02 13:49:18 crc kubenswrapper[4625]: I1202 13:49:18.793913 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 02 13:49:18 crc kubenswrapper[4625]: I1202 13:49:18.840395 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 02 13:49:18 crc kubenswrapper[4625]: I1202 13:49:18.866296 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 02 13:49:18 crc kubenswrapper[4625]: I1202 13:49:18.881458 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 02 13:49:18 crc kubenswrapper[4625]: I1202 13:49:18.919006 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Dec 02 13:49:19 crc kubenswrapper[4625]: I1202 13:49:19.017578 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 02 13:49:19 crc kubenswrapper[4625]: I1202 13:49:19.082065 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 02 13:49:19 crc kubenswrapper[4625]: I1202 13:49:19.276459 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Dec 02 13:49:19 crc kubenswrapper[4625]: I1202 13:49:19.348447 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 02 13:49:19 crc kubenswrapper[4625]: I1202 13:49:19.468016 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Dec 02 13:49:19 crc kubenswrapper[4625]: I1202 13:49:19.623103 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 02 13:49:19 crc kubenswrapper[4625]: I1202 13:49:19.650777 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 02 13:49:19 crc kubenswrapper[4625]: I1202 13:49:19.661300 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 02 13:49:19 crc kubenswrapper[4625]: I1202 13:49:19.690491 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 02 13:49:19 crc kubenswrapper[4625]: I1202 13:49:19.762601 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 02 13:49:19 crc kubenswrapper[4625]: I1202 13:49:19.890239 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 02 13:49:19 crc kubenswrapper[4625]: I1202 13:49:19.906831 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 02 13:49:20 crc kubenswrapper[4625]: I1202 13:49:20.013584 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 02 13:49:20 crc kubenswrapper[4625]: I1202 13:49:20.086614 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 02 13:49:20 crc kubenswrapper[4625]: I1202 13:49:20.170863 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 02 13:49:20 crc kubenswrapper[4625]: I1202 13:49:20.171238 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 02 13:49:20 crc kubenswrapper[4625]: I1202 13:49:20.200219 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Dec 02 13:49:20 crc kubenswrapper[4625]: I1202 13:49:20.387620 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 02 13:49:20 crc kubenswrapper[4625]: I1202 13:49:20.393551 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Dec 02 13:49:20 crc kubenswrapper[4625]: I1202 13:49:20.442841 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 02 13:49:20 crc kubenswrapper[4625]: I1202 13:49:20.597743 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 02 13:49:20 crc kubenswrapper[4625]: I1202 13:49:20.655192 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 02 13:49:20 crc kubenswrapper[4625]: I1202 13:49:20.660258 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 02 13:49:20 crc kubenswrapper[4625]: I1202 13:49:20.818795 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 02 13:49:20 crc kubenswrapper[4625]: I1202 13:49:20.844875 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Dec 02 13:49:20 crc kubenswrapper[4625]: I1202 13:49:20.926614 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 02 13:49:20 crc kubenswrapper[4625]: I1202 13:49:20.933517 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.020841 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.097835 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.107232 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.119393 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.153389 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.261339 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.261623 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.297691 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.371894 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.495900 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.591207 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.644484 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.713351 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.819782 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.847445 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.907514 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 02 13:49:21 crc kubenswrapper[4625]: I1202 13:49:21.937880 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.008365 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.035424 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.072051 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.187037 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.258598 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.301739 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.325224 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.359545 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.433615 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.433637 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.506742 4625 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.563637 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.577830 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.596741 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.639175 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.691197 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.713231 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.714703 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.735193 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.981636 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 02 13:49:22 crc kubenswrapper[4625]: I1202 13:49:22.998393 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.025113 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.044790 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.135002 4625 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.159113 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.195688 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.216661 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.251657 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.292522 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.303359 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.305594 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.391785 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.506514 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.556415 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.586354 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.623854 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.682119 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.690755 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.691796 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.696740 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.704293 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.733803 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.763730 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.816775 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.849851 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.883237 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.890897 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.940095 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 02 13:49:23 crc kubenswrapper[4625]: I1202 13:49:23.984121 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.103710 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.124683 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.136595 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.172418 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.313844 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.338038 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.371220 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 02 13:49:24 crc kubenswrapper[4625]: 
I1202 13:49:24.372635 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.413768 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.439727 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.513740 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.620644 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.629478 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.658621 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.710076 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.733364 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.740706 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.746568 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.866513 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.958345 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.995332 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 02 13:49:24 crc kubenswrapper[4625]: I1202 13:49:24.995353 4625 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 02 13:49:25 crc kubenswrapper[4625]: I1202 13:49:25.016039 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 02 13:49:25 crc kubenswrapper[4625]: I1202 13:49:25.074248 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 02 13:49:25 crc kubenswrapper[4625]: I1202 13:49:25.259667 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 02 13:49:25 crc kubenswrapper[4625]: I1202 13:49:25.452488 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 02 13:49:25 crc 
kubenswrapper[4625]: I1202 13:49:25.540472 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 02 13:49:25 crc kubenswrapper[4625]: I1202 13:49:25.595455 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 02 13:49:25 crc kubenswrapper[4625]: I1202 13:49:25.657867 4625 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 02 13:49:25 crc kubenswrapper[4625]: I1202 13:49:25.686301 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 02 13:49:25 crc kubenswrapper[4625]: I1202 13:49:25.724162 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 02 13:49:25 crc kubenswrapper[4625]: I1202 13:49:25.892477 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 02 13:49:25 crc kubenswrapper[4625]: I1202 13:49:25.897179 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 02 13:49:25 crc kubenswrapper[4625]: I1202 13:49:25.934405 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 02 13:49:25 crc kubenswrapper[4625]: I1202 13:49:25.969514 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.004684 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.093453 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.095444 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.220173 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.220371 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.220586 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.220841 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.288976 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.292400 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.303107 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.365811 4625 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-service-ca"/"signing-key" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.386780 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.407954 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.536281 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.596098 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.610357 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.642268 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.692972 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.704462 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.801861 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.824023 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.851814 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.916381 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.926294 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 02 13:49:26 crc kubenswrapper[4625]: I1202 13:49:26.947537 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.049949 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.063216 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.106637 4625 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.114862 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=45.114381195 podStartE2EDuration="45.114381195s" podCreationTimestamp="2025-12-02 13:48:42 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:49:05.442557309 +0000 UTC m=+301.404734384" watchObservedRunningTime="2025-12-02 13:49:27.114381195 +0000 UTC m=+323.076558270" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.116087 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.119161 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lb2rr","openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.119300 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.119813 4625 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce86a1bb-e2cd-4867-bf4e-297c2ff9f307" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.119848 4625 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ce86a1bb-e2cd-4867-bf4e-297c2ff9f307" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.125244 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.126876 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.143172 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.143147273 podStartE2EDuration="22.143147273s" podCreationTimestamp="2025-12-02 13:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:49:27.142897827 +0000 UTC m=+323.105074912" watchObservedRunningTime="2025-12-02 13:49:27.143147273 +0000 UTC m=+323.105324348" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.168245 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.196487 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.225302 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.279259 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.291287 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.349257 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.353997 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.371697 4625 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.460228 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.520638 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.524037 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.567696 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.569360 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.681745 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.709839 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.725402 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.739816 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.776283 4625 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.776617 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.776704 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.777625 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"babd7b5ba60876ad74f6078b547a1c7fc0efd924dd1af6a614398c80b5d9a829"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.777783 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" 
containerID="cri-o://babd7b5ba60876ad74f6078b547a1c7fc0efd924dd1af6a614398c80b5d9a829" gracePeriod=30 Dec 02 13:49:27 crc kubenswrapper[4625]: I1202 13:49:27.882609 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.028817 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.064752 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.249542 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.378549 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.479283 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.559561 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.590011 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.628190 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.632462 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.643975 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.696580 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.732996 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.751352 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.769427 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.770139 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.805800 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.858024 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.864774 4625 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" path="/var/lib/kubelet/pods/9ad29b6a-7f18-4ed4-9f10-25f93fecb421/volumes" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.873169 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 02 13:49:28 crc kubenswrapper[4625]: I1202 13:49:28.982026 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 02 13:49:29 crc kubenswrapper[4625]: I1202 13:49:29.097779 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 02 13:49:29 crc kubenswrapper[4625]: I1202 13:49:29.176390 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 02 13:49:29 crc kubenswrapper[4625]: I1202 13:49:29.178919 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 02 13:49:29 crc kubenswrapper[4625]: I1202 13:49:29.182553 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 02 13:49:29 crc kubenswrapper[4625]: I1202 13:49:29.591075 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 02 13:49:29 crc kubenswrapper[4625]: I1202 13:49:29.729227 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 02 13:49:29 crc kubenswrapper[4625]: I1202 13:49:29.764172 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 02 13:49:29 crc kubenswrapper[4625]: I1202 13:49:29.931737 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 02 13:49:30 crc kubenswrapper[4625]: I1202 13:49:30.010926 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 02 13:49:30 crc kubenswrapper[4625]: I1202 13:49:30.014237 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 02 13:49:30 crc kubenswrapper[4625]: I1202 13:49:30.180722 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 02 13:49:30 crc kubenswrapper[4625]: I1202 13:49:30.256616 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 02 13:49:30 crc kubenswrapper[4625]: I1202 13:49:30.401362 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 02 13:49:30 crc kubenswrapper[4625]: I1202 13:49:30.411193 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 02 13:49:30 crc kubenswrapper[4625]: I1202 13:49:30.628498 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 02 13:49:30 crc kubenswrapper[4625]: I1202 13:49:30.742042 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 
02 13:49:30 crc kubenswrapper[4625]: I1202 13:49:30.883663 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 02 13:49:30 crc kubenswrapper[4625]: I1202 13:49:30.885683 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 02 13:49:30 crc kubenswrapper[4625]: I1202 13:49:30.918700 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 02 13:49:31 crc kubenswrapper[4625]: I1202 13:49:31.086966 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 02 13:49:31 crc kubenswrapper[4625]: I1202 13:49:31.338346 4625 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 02 13:49:31 crc kubenswrapper[4625]: I1202 13:49:31.393164 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 02 13:49:31 crc kubenswrapper[4625]: I1202 13:49:31.530383 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 13:49:32 crc kubenswrapper[4625]: I1202 13:49:32.043601 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 13:49:32 crc kubenswrapper[4625]: I1202 13:49:32.456154 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 02 13:49:32 crc kubenswrapper[4625]: I1202 13:49:32.627593 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 02 13:49:37 crc kubenswrapper[4625]: I1202 13:49:37.977825 4625 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 13:49:37 crc kubenswrapper[4625]: I1202 13:49:37.978999 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://9abb7cc42b6f95762c9ecdc2759e6656b59d321b5e52c4295a67110ee0a257a9" gracePeriod=5 Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.623706 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.624755 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.762379 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.762461 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.762500 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.762523 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.762521 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.762584 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.762585 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.762611 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.762663 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.763025 4625 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.763045 4625 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.763055 4625 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.763066 4625 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.772196 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:49:43 crc kubenswrapper[4625]: I1202 13:49:43.864205 4625 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 02 13:49:44 crc kubenswrapper[4625]: I1202 13:49:44.120008 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 02 13:49:44 crc kubenswrapper[4625]: I1202 13:49:44.120084 4625 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="9abb7cc42b6f95762c9ecdc2759e6656b59d321b5e52c4295a67110ee0a257a9" exitCode=137 Dec 02 13:49:44 crc kubenswrapper[4625]: I1202 13:49:44.120155 4625 scope.go:117] "RemoveContainer" containerID="9abb7cc42b6f95762c9ecdc2759e6656b59d321b5e52c4295a67110ee0a257a9" Dec 02 13:49:44 crc kubenswrapper[4625]: I1202 13:49:44.120173 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 02 13:49:44 crc kubenswrapper[4625]: I1202 13:49:44.155464 4625 scope.go:117] "RemoveContainer" containerID="9abb7cc42b6f95762c9ecdc2759e6656b59d321b5e52c4295a67110ee0a257a9" Dec 02 13:49:44 crc kubenswrapper[4625]: E1202 13:49:44.167160 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9abb7cc42b6f95762c9ecdc2759e6656b59d321b5e52c4295a67110ee0a257a9\": container with ID starting with 9abb7cc42b6f95762c9ecdc2759e6656b59d321b5e52c4295a67110ee0a257a9 not found: ID does not exist" containerID="9abb7cc42b6f95762c9ecdc2759e6656b59d321b5e52c4295a67110ee0a257a9" Dec 02 13:49:44 crc kubenswrapper[4625]: I1202 13:49:44.168885 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9abb7cc42b6f95762c9ecdc2759e6656b59d321b5e52c4295a67110ee0a257a9"} err="failed to get container status \"9abb7cc42b6f95762c9ecdc2759e6656b59d321b5e52c4295a67110ee0a257a9\": rpc error: code = NotFound desc = could not find container \"9abb7cc42b6f95762c9ecdc2759e6656b59d321b5e52c4295a67110ee0a257a9\": container with ID starting with 9abb7cc42b6f95762c9ecdc2759e6656b59d321b5e52c4295a67110ee0a257a9 not found: ID does not exist" Dec 02 13:49:44 crc kubenswrapper[4625]: I1202 13:49:44.866193 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 02 13:49:44 crc kubenswrapper[4625]: I1202 13:49:44.868622 4625 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 02 13:49:44 crc kubenswrapper[4625]: I1202 13:49:44.887920 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 13:49:44 crc kubenswrapper[4625]: I1202 13:49:44.888011 4625 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="a19a9270-ae4d-4e0b-9c5c-dff3099aa50a" Dec 02 13:49:44 crc kubenswrapper[4625]: I1202 13:49:44.892063 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 02 13:49:44 crc kubenswrapper[4625]: I1202 13:49:44.892138 4625 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="a19a9270-ae4d-4e0b-9c5c-dff3099aa50a" Dec 02 13:49:54 crc kubenswrapper[4625]: I1202 13:49:54.211557 4625 generic.go:334] "Generic (PLEG): container finished" podID="4065d249-ffb1-406a-9e88-b6b97cf70f2a" containerID="1f06189811257927c555243199630b1596c6d73dcd32d73fefe67438d2d3faaf" exitCode=0 Dec 02 13:49:54 crc kubenswrapper[4625]: I1202 13:49:54.211897 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" event={"ID":"4065d249-ffb1-406a-9e88-b6b97cf70f2a","Type":"ContainerDied","Data":"1f06189811257927c555243199630b1596c6d73dcd32d73fefe67438d2d3faaf"} Dec 02 13:49:54 crc kubenswrapper[4625]: I1202 13:49:54.213448 4625 scope.go:117] "RemoveContainer" containerID="1f06189811257927c555243199630b1596c6d73dcd32d73fefe67438d2d3faaf" Dec 02 13:49:55 crc kubenswrapper[4625]: I1202 13:49:55.221722 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" event={"ID":"4065d249-ffb1-406a-9e88-b6b97cf70f2a","Type":"ContainerStarted","Data":"a8b5b7690802899bbf7f6a18be22c374122e8f664e0ae4e4911c46cf73ed43f2"} Dec 02 13:49:55 crc kubenswrapper[4625]: I1202 13:49:55.222684 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" Dec 02 13:49:55 crc kubenswrapper[4625]: I1202 13:49:55.225121 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" Dec 02 13:49:57 crc kubenswrapper[4625]: I1202 13:49:57.100393 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 02 13:49:58 crc kubenswrapper[4625]: I1202 13:49:58.243858 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 02 13:49:58 crc kubenswrapper[4625]: I1202 13:49:58.245909 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 02 13:49:58 crc kubenswrapper[4625]: I1202 13:49:58.245986 4625 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="babd7b5ba60876ad74f6078b547a1c7fc0efd924dd1af6a614398c80b5d9a829" exitCode=137 Dec 02 13:49:58 crc kubenswrapper[4625]: I1202 13:49:58.246031 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"babd7b5ba60876ad74f6078b547a1c7fc0efd924dd1af6a614398c80b5d9a829"} Dec 02 13:49:58 crc kubenswrapper[4625]: I1202 13:49:58.246086 4625 scope.go:117] "RemoveContainer" containerID="36960a51d1cab02d06637f324a593b55fc0d8738656323a8aadab75a8dcf3b3d" Dec 02 13:49:59 crc kubenswrapper[4625]: I1202 13:49:59.256562 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 02 13:49:59 crc kubenswrapper[4625]: I1202 13:49:59.257948 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"448c4bd0f20aada48eab79a21f1b263450ee63fc23c0c62229746d2d211f6aa1"} Dec 02 13:50:01 crc kubenswrapper[4625]: I1202 13:50:01.359260 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 02 13:50:03 crc kubenswrapper[4625]: I1202 13:50:03.789767 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:50:07 crc kubenswrapper[4625]: I1202 13:50:07.775510 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:50:07 crc kubenswrapper[4625]: I1202 13:50:07.779976 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:50:08 crc kubenswrapper[4625]: I1202 13:50:08.324044 
4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.179604 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m"] Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.180912 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" podUID="1e0bdd20-db2f-4cc8-b939-5ccb65599bbb" containerName="route-controller-manager" containerID="cri-o://6cf0d7911954d2657c932d9237dbc66399faa07da992a20a3059ec7590458599" gracePeriod=30 Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.185104 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rxs7k"] Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.185492 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" podUID="875633b5-d52b-4d18-9322-dfbe2d73aed4" containerName="controller-manager" containerID="cri-o://89b44ad264f6673ce9311ec01ae81859911e730706f0946f2b4193981ad0ff1b" gracePeriod=30 Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.371592 4625 generic.go:334] "Generic (PLEG): container finished" podID="875633b5-d52b-4d18-9322-dfbe2d73aed4" containerID="89b44ad264f6673ce9311ec01ae81859911e730706f0946f2b4193981ad0ff1b" exitCode=0 Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.371668 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" event={"ID":"875633b5-d52b-4d18-9322-dfbe2d73aed4","Type":"ContainerDied","Data":"89b44ad264f6673ce9311ec01ae81859911e730706f0946f2b4193981ad0ff1b"} Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.377930 4625 generic.go:334] "Generic (PLEG): container finished" podID="1e0bdd20-db2f-4cc8-b939-5ccb65599bbb" containerID="6cf0d7911954d2657c932d9237dbc66399faa07da992a20a3059ec7590458599" exitCode=0 Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.378002 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" event={"ID":"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb","Type":"ContainerDied","Data":"6cf0d7911954d2657c932d9237dbc66399faa07da992a20a3059ec7590458599"} Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.636769 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.709594 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.786542 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-client-ca\") pod \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\" (UID: \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\") " Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.786594 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-config\") pod \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\" (UID: \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\") " Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.786656 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-serving-cert\") pod \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\" (UID: \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\") " Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.786721 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-client-ca\") pod \"875633b5-d52b-4d18-9322-dfbe2d73aed4\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.786794 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/875633b5-d52b-4d18-9322-dfbe2d73aed4-serving-cert\") pod \"875633b5-d52b-4d18-9322-dfbe2d73aed4\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.786822 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-proxy-ca-bundles\") pod \"875633b5-d52b-4d18-9322-dfbe2d73aed4\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.786950 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtdsd\" (UniqueName: \"kubernetes.io/projected/875633b5-d52b-4d18-9322-dfbe2d73aed4-kube-api-access-xtdsd\") pod \"875633b5-d52b-4d18-9322-dfbe2d73aed4\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.787016 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-config\") pod \"875633b5-d52b-4d18-9322-dfbe2d73aed4\" (UID: \"875633b5-d52b-4d18-9322-dfbe2d73aed4\") " Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.787062 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59r8c\" (UniqueName: \"kubernetes.io/projected/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-kube-api-access-59r8c\") pod \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\" (UID: \"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb\") " Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.787651 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-client-ca" (OuterVolumeSpecName: "client-ca") pod "875633b5-d52b-4d18-9322-dfbe2d73aed4" 
(UID: "875633b5-d52b-4d18-9322-dfbe2d73aed4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.787663 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "875633b5-d52b-4d18-9322-dfbe2d73aed4" (UID: "875633b5-d52b-4d18-9322-dfbe2d73aed4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.788146 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-config" (OuterVolumeSpecName: "config") pod "875633b5-d52b-4d18-9322-dfbe2d73aed4" (UID: "875633b5-d52b-4d18-9322-dfbe2d73aed4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.788834 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-client-ca" (OuterVolumeSpecName: "client-ca") pod "1e0bdd20-db2f-4cc8-b939-5ccb65599bbb" (UID: "1e0bdd20-db2f-4cc8-b939-5ccb65599bbb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.788890 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-config" (OuterVolumeSpecName: "config") pod "1e0bdd20-db2f-4cc8-b939-5ccb65599bbb" (UID: "1e0bdd20-db2f-4cc8-b939-5ccb65599bbb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.794821 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875633b5-d52b-4d18-9322-dfbe2d73aed4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "875633b5-d52b-4d18-9322-dfbe2d73aed4" (UID: "875633b5-d52b-4d18-9322-dfbe2d73aed4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.795176 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/875633b5-d52b-4d18-9322-dfbe2d73aed4-kube-api-access-xtdsd" (OuterVolumeSpecName: "kube-api-access-xtdsd") pod "875633b5-d52b-4d18-9322-dfbe2d73aed4" (UID: "875633b5-d52b-4d18-9322-dfbe2d73aed4"). InnerVolumeSpecName "kube-api-access-xtdsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.797170 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1e0bdd20-db2f-4cc8-b939-5ccb65599bbb" (UID: "1e0bdd20-db2f-4cc8-b939-5ccb65599bbb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.797683 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-kube-api-access-59r8c" (OuterVolumeSpecName: "kube-api-access-59r8c") pod "1e0bdd20-db2f-4cc8-b939-5ccb65599bbb" (UID: "1e0bdd20-db2f-4cc8-b939-5ccb65599bbb"). 
InnerVolumeSpecName "kube-api-access-59r8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.888656 4625 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.888707 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/875633b5-d52b-4d18-9322-dfbe2d73aed4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.888721 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtdsd\" (UniqueName: \"kubernetes.io/projected/875633b5-d52b-4d18-9322-dfbe2d73aed4-kube-api-access-xtdsd\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.888739 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.888752 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59r8c\" (UniqueName: \"kubernetes.io/projected/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-kube-api-access-59r8c\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.888768 4625 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.888781 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.888792 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:15 crc kubenswrapper[4625]: I1202 13:50:15.888801 4625 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/875633b5-d52b-4d18-9322-dfbe2d73aed4-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.386112 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.394611 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m" event={"ID":"1e0bdd20-db2f-4cc8-b939-5ccb65599bbb","Type":"ContainerDied","Data":"d17b21b4c15d0a028734d04c0267748d3eb0c7fc758c916a378a70c8dbf7e4de"} Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.394669 4625 scope.go:117] "RemoveContainer" containerID="6cf0d7911954d2657c932d9237dbc66399faa07da992a20a3059ec7590458599" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.399489 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" event={"ID":"875633b5-d52b-4d18-9322-dfbe2d73aed4","Type":"ContainerDied","Data":"4c72f9a530c7effc0e12932c35b4c10d727a8bb160d812d16062eaaa30744c4a"} Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.399595 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rxs7k" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.453168 4625 scope.go:117] "RemoveContainer" containerID="89b44ad264f6673ce9311ec01ae81859911e730706f0946f2b4193981ad0ff1b" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.468616 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m"] Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.477661 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ctx7m"] Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.488115 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rxs7k"] Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.495184 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rxs7k"] Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.867516 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e0bdd20-db2f-4cc8-b939-5ccb65599bbb" path="/var/lib/kubelet/pods/1e0bdd20-db2f-4cc8-b939-5ccb65599bbb/volumes" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.868846 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="875633b5-d52b-4d18-9322-dfbe2d73aed4" path="/var/lib/kubelet/pods/875633b5-d52b-4d18-9322-dfbe2d73aed4/volumes" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.965442 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7589d8b788-r792w"] Dec 02 13:50:16 crc kubenswrapper[4625]: E1202 13:50:16.965837 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" containerName="extract-content" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.965856 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" containerName="extract-content" Dec 02 13:50:16 crc kubenswrapper[4625]: E1202 13:50:16.965866 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.965873 4625 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 13:50:16 crc kubenswrapper[4625]: E1202 13:50:16.965892 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" containerName="extract-utilities" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.965900 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" containerName="extract-utilities" Dec 02 13:50:16 crc kubenswrapper[4625]: E1202 13:50:16.965908 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" containerName="registry-server" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.965915 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" containerName="registry-server" Dec 02 13:50:16 crc kubenswrapper[4625]: E1202 13:50:16.965927 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="875633b5-d52b-4d18-9322-dfbe2d73aed4" containerName="controller-manager" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.965939 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="875633b5-d52b-4d18-9322-dfbe2d73aed4" containerName="controller-manager" Dec 02 13:50:16 crc kubenswrapper[4625]: E1202 13:50:16.965953 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0bdd20-db2f-4cc8-b939-5ccb65599bbb" containerName="route-controller-manager" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.965962 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0bdd20-db2f-4cc8-b939-5ccb65599bbb" containerName="route-controller-manager" Dec 02 13:50:16 crc kubenswrapper[4625]: E1202 13:50:16.965975 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a853ad1c-016e-4012-a450-f79c3585216b" containerName="installer" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.965985 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="a853ad1c-016e-4012-a450-f79c3585216b" containerName="installer" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.966111 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.966126 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="a853ad1c-016e-4012-a450-f79c3585216b" containerName="installer" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.966156 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ad29b6a-7f18-4ed4-9f10-25f93fecb421" containerName="registry-server" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.966166 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="875633b5-d52b-4d18-9322-dfbe2d73aed4" containerName="controller-manager" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.966180 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e0bdd20-db2f-4cc8-b939-5ccb65599bbb" containerName="route-controller-manager" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.966844 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.969241 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7"] Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.970359 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.977221 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.977262 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.977321 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.978152 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.978276 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.978338 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.978897 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.978976 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.979213 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.979359 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.979398 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.980140 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.987090 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 13:50:16 crc kubenswrapper[4625]: I1202 13:50:16.995189 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7589d8b788-r792w"] Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.029604 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7"] Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.109484 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d63b5186-88bc-4189-b702-61ad103b655d-config\") pod \"route-controller-manager-7c4fcbccf6-nvst7\" (UID: \"d63b5186-88bc-4189-b702-61ad103b655d\") " pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.109548 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-client-ca\") pod \"controller-manager-7589d8b788-r792w\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.109578 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-config\") pod \"controller-manager-7589d8b788-r792w\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.109609 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d63b5186-88bc-4189-b702-61ad103b655d-client-ca\") pod \"route-controller-manager-7c4fcbccf6-nvst7\" (UID: \"d63b5186-88bc-4189-b702-61ad103b655d\") " pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.109674 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63b5186-88bc-4189-b702-61ad103b655d-serving-cert\") pod \"route-controller-manager-7c4fcbccf6-nvst7\" (UID: \"d63b5186-88bc-4189-b702-61ad103b655d\") " pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.109701 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f07e5125-1c67-453b-a03e-c3c3036d7b19-serving-cert\") pod \"controller-manager-7589d8b788-r792w\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.109769 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-proxy-ca-bundles\") pod \"controller-manager-7589d8b788-r792w\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.109801 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb579\" (UniqueName: \"kubernetes.io/projected/d63b5186-88bc-4189-b702-61ad103b655d-kube-api-access-pb579\") pod \"route-controller-manager-7c4fcbccf6-nvst7\" (UID: \"d63b5186-88bc-4189-b702-61ad103b655d\") " pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.109836 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5rfj\" 
(UniqueName: \"kubernetes.io/projected/f07e5125-1c67-453b-a03e-c3c3036d7b19-kube-api-access-l5rfj\") pod \"controller-manager-7589d8b788-r792w\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.211675 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-proxy-ca-bundles\") pod \"controller-manager-7589d8b788-r792w\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.211734 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb579\" (UniqueName: \"kubernetes.io/projected/d63b5186-88bc-4189-b702-61ad103b655d-kube-api-access-pb579\") pod \"route-controller-manager-7c4fcbccf6-nvst7\" (UID: \"d63b5186-88bc-4189-b702-61ad103b655d\") " pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.211770 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5rfj\" (UniqueName: \"kubernetes.io/projected/f07e5125-1c67-453b-a03e-c3c3036d7b19-kube-api-access-l5rfj\") pod \"controller-manager-7589d8b788-r792w\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.211813 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63b5186-88bc-4189-b702-61ad103b655d-config\") pod \"route-controller-manager-7c4fcbccf6-nvst7\" (UID: \"d63b5186-88bc-4189-b702-61ad103b655d\") " pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.212007 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-client-ca\") pod \"controller-manager-7589d8b788-r792w\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.212202 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-config\") pod \"controller-manager-7589d8b788-r792w\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.212333 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d63b5186-88bc-4189-b702-61ad103b655d-client-ca\") pod \"route-controller-manager-7c4fcbccf6-nvst7\" (UID: \"d63b5186-88bc-4189-b702-61ad103b655d\") " pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.212554 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63b5186-88bc-4189-b702-61ad103b655d-serving-cert\") pod \"route-controller-manager-7c4fcbccf6-nvst7\" 
(UID: \"d63b5186-88bc-4189-b702-61ad103b655d\") " pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.212629 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f07e5125-1c67-453b-a03e-c3c3036d7b19-serving-cert\") pod \"controller-manager-7589d8b788-r792w\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.213087 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-client-ca\") pod \"controller-manager-7589d8b788-r792w\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.213250 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-proxy-ca-bundles\") pod \"controller-manager-7589d8b788-r792w\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.213418 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63b5186-88bc-4189-b702-61ad103b655d-config\") pod \"route-controller-manager-7c4fcbccf6-nvst7\" (UID: \"d63b5186-88bc-4189-b702-61ad103b655d\") " pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.213765 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d63b5186-88bc-4189-b702-61ad103b655d-client-ca\") pod \"route-controller-manager-7c4fcbccf6-nvst7\" (UID: \"d63b5186-88bc-4189-b702-61ad103b655d\") " pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.214409 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-config\") pod \"controller-manager-7589d8b788-r792w\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.218843 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f07e5125-1c67-453b-a03e-c3c3036d7b19-serving-cert\") pod \"controller-manager-7589d8b788-r792w\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.223942 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63b5186-88bc-4189-b702-61ad103b655d-serving-cert\") pod \"route-controller-manager-7c4fcbccf6-nvst7\" (UID: \"d63b5186-88bc-4189-b702-61ad103b655d\") " pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.233391 4625 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-l5rfj\" (UniqueName: \"kubernetes.io/projected/f07e5125-1c67-453b-a03e-c3c3036d7b19-kube-api-access-l5rfj\") pod \"controller-manager-7589d8b788-r792w\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.236600 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb579\" (UniqueName: \"kubernetes.io/projected/d63b5186-88bc-4189-b702-61ad103b655d-kube-api-access-pb579\") pod \"route-controller-manager-7c4fcbccf6-nvst7\" (UID: \"d63b5186-88bc-4189-b702-61ad103b655d\") " pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.286799 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.299384 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.593889 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7589d8b788-r792w"] Dec 02 13:50:17 crc kubenswrapper[4625]: I1202 13:50:17.648947 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7"] Dec 02 13:50:17 crc kubenswrapper[4625]: W1202 13:50:17.665395 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd63b5186_88bc_4189_b702_61ad103b655d.slice/crio-729a9e6d2c190fe3d9aeb110b689cbc154714adc63b13d37497605150f29fa8e WatchSource:0}: Error finding container 729a9e6d2c190fe3d9aeb110b689cbc154714adc63b13d37497605150f29fa8e: Status 404 returned error can't find the container with id 729a9e6d2c190fe3d9aeb110b689cbc154714adc63b13d37497605150f29fa8e Dec 02 13:50:18 crc kubenswrapper[4625]: I1202 13:50:18.454780 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" event={"ID":"d63b5186-88bc-4189-b702-61ad103b655d","Type":"ContainerStarted","Data":"d51a132491b17e725e018f1b7fcfb027a5b32a893522c7bf76f494fc825fd9e7"} Dec 02 13:50:18 crc kubenswrapper[4625]: I1202 13:50:18.455189 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" event={"ID":"d63b5186-88bc-4189-b702-61ad103b655d","Type":"ContainerStarted","Data":"729a9e6d2c190fe3d9aeb110b689cbc154714adc63b13d37497605150f29fa8e"} Dec 02 13:50:18 crc kubenswrapper[4625]: I1202 13:50:18.455212 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:18 crc kubenswrapper[4625]: I1202 13:50:18.458388 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" event={"ID":"f07e5125-1c67-453b-a03e-c3c3036d7b19","Type":"ContainerStarted","Data":"fcaf8bece42a5421830de4c8cfa5b0a4a60be696ae3f2ee9a79ee27de88419d9"} Dec 02 13:50:18 crc kubenswrapper[4625]: I1202 13:50:18.458612 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:18 crc kubenswrapper[4625]: I1202 13:50:18.458740 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" event={"ID":"f07e5125-1c67-453b-a03e-c3c3036d7b19","Type":"ContainerStarted","Data":"abdfe3e9f8373f8447173c859e738ca88600b814ed0b9b1299120ed5151a46d6"} Dec 02 13:50:18 crc kubenswrapper[4625]: I1202 13:50:18.464619 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:18 crc kubenswrapper[4625]: I1202 13:50:18.464977 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:18 crc kubenswrapper[4625]: I1202 13:50:18.485855 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" podStartSLOduration=3.485828032 podStartE2EDuration="3.485828032s" podCreationTimestamp="2025-12-02 13:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:50:18.483254141 +0000 UTC m=+374.445431216" watchObservedRunningTime="2025-12-02 13:50:18.485828032 +0000 UTC m=+374.448005107" Dec 02 13:50:18 crc kubenswrapper[4625]: I1202 13:50:18.505416 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" podStartSLOduration=3.505387969 podStartE2EDuration="3.505387969s" podCreationTimestamp="2025-12-02 13:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:50:18.503292891 +0000 UTC m=+374.465469966" watchObservedRunningTime="2025-12-02 13:50:18.505387969 +0000 UTC m=+374.467565044" Dec 02 13:50:19 crc kubenswrapper[4625]: I1202 13:50:19.271694 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:50:19 crc kubenswrapper[4625]: I1202 13:50:19.271803 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:50:33 crc kubenswrapper[4625]: I1202 13:50:33.416603 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7589d8b788-r792w"] Dec 02 13:50:33 crc kubenswrapper[4625]: I1202 13:50:33.417704 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" podUID="f07e5125-1c67-453b-a03e-c3c3036d7b19" containerName="controller-manager" containerID="cri-o://fcaf8bece42a5421830de4c8cfa5b0a4a60be696ae3f2ee9a79ee27de88419d9" gracePeriod=30 Dec 02 13:50:33 crc kubenswrapper[4625]: I1202 13:50:33.464219 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7"] Dec 02 13:50:33 crc kubenswrapper[4625]: I1202 13:50:33.464701 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" podUID="d63b5186-88bc-4189-b702-61ad103b655d" containerName="route-controller-manager" containerID="cri-o://d51a132491b17e725e018f1b7fcfb027a5b32a893522c7bf76f494fc825fd9e7" gracePeriod=30 Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.064986 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.070921 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.198855 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-client-ca\") pod \"f07e5125-1c67-453b-a03e-c3c3036d7b19\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.198928 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d63b5186-88bc-4189-b702-61ad103b655d-client-ca\") pod \"d63b5186-88bc-4189-b702-61ad103b655d\" (UID: \"d63b5186-88bc-4189-b702-61ad103b655d\") " Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.198968 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb579\" (UniqueName: \"kubernetes.io/projected/d63b5186-88bc-4189-b702-61ad103b655d-kube-api-access-pb579\") pod \"d63b5186-88bc-4189-b702-61ad103b655d\" (UID: \"d63b5186-88bc-4189-b702-61ad103b655d\") " Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.199028 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63b5186-88bc-4189-b702-61ad103b655d-config\") pod \"d63b5186-88bc-4189-b702-61ad103b655d\" (UID: \"d63b5186-88bc-4189-b702-61ad103b655d\") " Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.199135 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-proxy-ca-bundles\") pod \"f07e5125-1c67-453b-a03e-c3c3036d7b19\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.199199 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5rfj\" (UniqueName: \"kubernetes.io/projected/f07e5125-1c67-453b-a03e-c3c3036d7b19-kube-api-access-l5rfj\") pod \"f07e5125-1c67-453b-a03e-c3c3036d7b19\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.199231 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f07e5125-1c67-453b-a03e-c3c3036d7b19-serving-cert\") pod \"f07e5125-1c67-453b-a03e-c3c3036d7b19\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.199249 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-config\") pod \"f07e5125-1c67-453b-a03e-c3c3036d7b19\" (UID: \"f07e5125-1c67-453b-a03e-c3c3036d7b19\") " Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.199269 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63b5186-88bc-4189-b702-61ad103b655d-serving-cert\") pod \"d63b5186-88bc-4189-b702-61ad103b655d\" (UID: \"d63b5186-88bc-4189-b702-61ad103b655d\") " Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.200116 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-client-ca" (OuterVolumeSpecName: "client-ca") pod "f07e5125-1c67-453b-a03e-c3c3036d7b19" (UID: "f07e5125-1c67-453b-a03e-c3c3036d7b19"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.200380 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f07e5125-1c67-453b-a03e-c3c3036d7b19" (UID: "f07e5125-1c67-453b-a03e-c3c3036d7b19"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.201143 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d63b5186-88bc-4189-b702-61ad103b655d-client-ca" (OuterVolumeSpecName: "client-ca") pod "d63b5186-88bc-4189-b702-61ad103b655d" (UID: "d63b5186-88bc-4189-b702-61ad103b655d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.201182 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d63b5186-88bc-4189-b702-61ad103b655d-config" (OuterVolumeSpecName: "config") pod "d63b5186-88bc-4189-b702-61ad103b655d" (UID: "d63b5186-88bc-4189-b702-61ad103b655d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.201202 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-config" (OuterVolumeSpecName: "config") pod "f07e5125-1c67-453b-a03e-c3c3036d7b19" (UID: "f07e5125-1c67-453b-a03e-c3c3036d7b19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.206485 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f07e5125-1c67-453b-a03e-c3c3036d7b19-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f07e5125-1c67-453b-a03e-c3c3036d7b19" (UID: "f07e5125-1c67-453b-a03e-c3c3036d7b19"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.206520 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d63b5186-88bc-4189-b702-61ad103b655d-kube-api-access-pb579" (OuterVolumeSpecName: "kube-api-access-pb579") pod "d63b5186-88bc-4189-b702-61ad103b655d" (UID: "d63b5186-88bc-4189-b702-61ad103b655d"). InnerVolumeSpecName "kube-api-access-pb579". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.206567 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d63b5186-88bc-4189-b702-61ad103b655d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d63b5186-88bc-4189-b702-61ad103b655d" (UID: "d63b5186-88bc-4189-b702-61ad103b655d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.206612 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f07e5125-1c67-453b-a03e-c3c3036d7b19-kube-api-access-l5rfj" (OuterVolumeSpecName: "kube-api-access-l5rfj") pod "f07e5125-1c67-453b-a03e-c3c3036d7b19" (UID: "f07e5125-1c67-453b-a03e-c3c3036d7b19"). InnerVolumeSpecName "kube-api-access-l5rfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.300871 4625 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.300918 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5rfj\" (UniqueName: \"kubernetes.io/projected/f07e5125-1c67-453b-a03e-c3c3036d7b19-kube-api-access-l5rfj\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.300934 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.300943 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f07e5125-1c67-453b-a03e-c3c3036d7b19-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.300952 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63b5186-88bc-4189-b702-61ad103b655d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.300960 4625 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f07e5125-1c67-453b-a03e-c3c3036d7b19-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.300969 4625 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d63b5186-88bc-4189-b702-61ad103b655d-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.300979 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb579\" (UniqueName: \"kubernetes.io/projected/d63b5186-88bc-4189-b702-61ad103b655d-kube-api-access-pb579\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.300988 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63b5186-88bc-4189-b702-61ad103b655d-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.389860 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cc687dd9c-k669l"] Dec 02 13:50:34 crc kubenswrapper[4625]: 
E1202 13:50:34.390148 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63b5186-88bc-4189-b702-61ad103b655d" containerName="route-controller-manager" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.390166 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63b5186-88bc-4189-b702-61ad103b655d" containerName="route-controller-manager" Dec 02 13:50:34 crc kubenswrapper[4625]: E1202 13:50:34.390190 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07e5125-1c67-453b-a03e-c3c3036d7b19" containerName="controller-manager" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.390200 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07e5125-1c67-453b-a03e-c3c3036d7b19" containerName="controller-manager" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.390325 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="f07e5125-1c67-453b-a03e-c3c3036d7b19" containerName="controller-manager" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.390345 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="d63b5186-88bc-4189-b702-61ad103b655d" containerName="route-controller-manager" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.390827 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.422072 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cc687dd9c-k669l"] Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.431429 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh"] Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.432208 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.481075 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh"] Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.535343 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-serving-cert\") pod \"route-controller-manager-c669855c-kqjjh\" (UID: \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\") " pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.535421 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kxgc\" (UniqueName: \"kubernetes.io/projected/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-kube-api-access-8kxgc\") pod \"route-controller-manager-c669855c-kqjjh\" (UID: \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\") " pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.535476 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-client-ca\") pod \"controller-manager-7cc687dd9c-k669l\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.535509 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9f9646-7f20-40d0-82f0-540de416aa5e-serving-cert\") pod \"controller-manager-7cc687dd9c-k669l\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.535566 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-config\") pod \"controller-manager-7cc687dd9c-k669l\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.535592 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-client-ca\") pod \"route-controller-manager-c669855c-kqjjh\" (UID: \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\") " pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.535709 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-config\") pod \"route-controller-manager-c669855c-kqjjh\" (UID: \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\") " pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.535736 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-proxy-ca-bundles\") pod \"controller-manager-7cc687dd9c-k669l\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.535768 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x58cl\" (UniqueName: \"kubernetes.io/projected/ef9f9646-7f20-40d0-82f0-540de416aa5e-kube-api-access-x58cl\") pod \"controller-manager-7cc687dd9c-k669l\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.558941 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ws49j"] Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.560235 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ws49j" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" containerName="registry-server" containerID="cri-o://787c8bc7e50f3f0b5c35fd2ed346849f13aeec67fcb3b2bc04b6ce4c535c01e8" gracePeriod=2 Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.567769 4625 generic.go:334] "Generic (PLEG): container finished" podID="d63b5186-88bc-4189-b702-61ad103b655d" containerID="d51a132491b17e725e018f1b7fcfb027a5b32a893522c7bf76f494fc825fd9e7" exitCode=0 Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.567863 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" event={"ID":"d63b5186-88bc-4189-b702-61ad103b655d","Type":"ContainerDied","Data":"d51a132491b17e725e018f1b7fcfb027a5b32a893522c7bf76f494fc825fd9e7"} Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.567924 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" event={"ID":"d63b5186-88bc-4189-b702-61ad103b655d","Type":"ContainerDied","Data":"729a9e6d2c190fe3d9aeb110b689cbc154714adc63b13d37497605150f29fa8e"} Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.567942 4625 scope.go:117] "RemoveContainer" containerID="d51a132491b17e725e018f1b7fcfb027a5b32a893522c7bf76f494fc825fd9e7" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.567942 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.572128 4625 generic.go:334] "Generic (PLEG): container finished" podID="f07e5125-1c67-453b-a03e-c3c3036d7b19" containerID="fcaf8bece42a5421830de4c8cfa5b0a4a60be696ae3f2ee9a79ee27de88419d9" exitCode=0 Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.572201 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" event={"ID":"f07e5125-1c67-453b-a03e-c3c3036d7b19","Type":"ContainerDied","Data":"fcaf8bece42a5421830de4c8cfa5b0a4a60be696ae3f2ee9a79ee27de88419d9"} Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.572234 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" event={"ID":"f07e5125-1c67-453b-a03e-c3c3036d7b19","Type":"ContainerDied","Data":"abdfe3e9f8373f8447173c859e738ca88600b814ed0b9b1299120ed5151a46d6"} Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.572402 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7589d8b788-r792w" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.590964 4625 scope.go:117] "RemoveContainer" containerID="d51a132491b17e725e018f1b7fcfb027a5b32a893522c7bf76f494fc825fd9e7" Dec 02 13:50:34 crc kubenswrapper[4625]: E1202 13:50:34.591597 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d51a132491b17e725e018f1b7fcfb027a5b32a893522c7bf76f494fc825fd9e7\": container with ID starting with d51a132491b17e725e018f1b7fcfb027a5b32a893522c7bf76f494fc825fd9e7 not found: ID does not exist" containerID="d51a132491b17e725e018f1b7fcfb027a5b32a893522c7bf76f494fc825fd9e7" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.591672 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d51a132491b17e725e018f1b7fcfb027a5b32a893522c7bf76f494fc825fd9e7"} err="failed to get container status \"d51a132491b17e725e018f1b7fcfb027a5b32a893522c7bf76f494fc825fd9e7\": rpc error: code = NotFound desc = could not find container \"d51a132491b17e725e018f1b7fcfb027a5b32a893522c7bf76f494fc825fd9e7\": container with ID starting with d51a132491b17e725e018f1b7fcfb027a5b32a893522c7bf76f494fc825fd9e7 not found: ID does not exist" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.591723 4625 scope.go:117] "RemoveContainer" containerID="fcaf8bece42a5421830de4c8cfa5b0a4a60be696ae3f2ee9a79ee27de88419d9" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.617072 4625 scope.go:117] "RemoveContainer" containerID="fcaf8bece42a5421830de4c8cfa5b0a4a60be696ae3f2ee9a79ee27de88419d9" Dec 02 13:50:34 crc kubenswrapper[4625]: E1202 13:50:34.617827 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcaf8bece42a5421830de4c8cfa5b0a4a60be696ae3f2ee9a79ee27de88419d9\": container with ID starting with fcaf8bece42a5421830de4c8cfa5b0a4a60be696ae3f2ee9a79ee27de88419d9 not found: ID does not exist" containerID="fcaf8bece42a5421830de4c8cfa5b0a4a60be696ae3f2ee9a79ee27de88419d9" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.617882 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcaf8bece42a5421830de4c8cfa5b0a4a60be696ae3f2ee9a79ee27de88419d9"} 
err="failed to get container status \"fcaf8bece42a5421830de4c8cfa5b0a4a60be696ae3f2ee9a79ee27de88419d9\": rpc error: code = NotFound desc = could not find container \"fcaf8bece42a5421830de4c8cfa5b0a4a60be696ae3f2ee9a79ee27de88419d9\": container with ID starting with fcaf8bece42a5421830de4c8cfa5b0a4a60be696ae3f2ee9a79ee27de88419d9 not found: ID does not exist" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.626820 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7"] Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.636780 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-client-ca\") pod \"controller-manager-7cc687dd9c-k669l\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.636835 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9f9646-7f20-40d0-82f0-540de416aa5e-serving-cert\") pod \"controller-manager-7cc687dd9c-k669l\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.636860 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-config\") pod \"controller-manager-7cc687dd9c-k669l\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.636887 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-client-ca\") pod \"route-controller-manager-c669855c-kqjjh\" (UID: \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\") " pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.636925 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-config\") pod \"route-controller-manager-c669855c-kqjjh\" (UID: \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\") " pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.636946 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-proxy-ca-bundles\") pod \"controller-manager-7cc687dd9c-k669l\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.636985 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x58cl\" (UniqueName: \"kubernetes.io/projected/ef9f9646-7f20-40d0-82f0-540de416aa5e-kube-api-access-x58cl\") pod \"controller-manager-7cc687dd9c-k669l\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 
13:50:34.637020 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-serving-cert\") pod \"route-controller-manager-c669855c-kqjjh\" (UID: \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\") " pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.637038 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kxgc\" (UniqueName: \"kubernetes.io/projected/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-kube-api-access-8kxgc\") pod \"route-controller-manager-c669855c-kqjjh\" (UID: \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\") " pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.638043 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c4fcbccf6-nvst7"] Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.638880 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-client-ca\") pod \"controller-manager-7cc687dd9c-k669l\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.639102 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-config\") pod \"route-controller-manager-c669855c-kqjjh\" (UID: \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\") " pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.640103 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-proxy-ca-bundles\") pod \"controller-manager-7cc687dd9c-k669l\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.641909 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-client-ca\") pod \"route-controller-manager-c669855c-kqjjh\" (UID: \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\") " pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.642089 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-config\") pod \"controller-manager-7cc687dd9c-k669l\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.642762 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9f9646-7f20-40d0-82f0-540de416aa5e-serving-cert\") pod \"controller-manager-7cc687dd9c-k669l\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.644707 
4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-serving-cert\") pod \"route-controller-manager-c669855c-kqjjh\" (UID: \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\") " pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.647663 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7589d8b788-r792w"] Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.653929 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7589d8b788-r792w"] Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.659368 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kxgc\" (UniqueName: \"kubernetes.io/projected/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-kube-api-access-8kxgc\") pod \"route-controller-manager-c669855c-kqjjh\" (UID: \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\") " pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.664372 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x58cl\" (UniqueName: \"kubernetes.io/projected/ef9f9646-7f20-40d0-82f0-540de416aa5e-kube-api-access-x58cl\") pod \"controller-manager-7cc687dd9c-k669l\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.708586 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.782996 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.891388 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d63b5186-88bc-4189-b702-61ad103b655d" path="/var/lib/kubelet/pods/d63b5186-88bc-4189-b702-61ad103b655d/volumes" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.892215 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f07e5125-1c67-453b-a03e-c3c3036d7b19" path="/var/lib/kubelet/pods/f07e5125-1c67-453b-a03e-c3c3036d7b19/volumes" Dec 02 13:50:34 crc kubenswrapper[4625]: I1202 13:50:34.968256 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.029844 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cc687dd9c-k669l"] Dec 02 13:50:35 crc kubenswrapper[4625]: W1202 13:50:35.035489 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef9f9646_7f20_40d0_82f0_540de416aa5e.slice/crio-cb37af77943ee7b1b8edef93e18644920306b1427a41648c98db21be694f43da WatchSource:0}: Error finding container cb37af77943ee7b1b8edef93e18644920306b1427a41648c98db21be694f43da: Status 404 returned error can't find the container with id cb37af77943ee7b1b8edef93e18644920306b1427a41648c98db21be694f43da Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.154444 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4tc2\" (UniqueName: \"kubernetes.io/projected/e24375bb-53a2-4ee7-992e-4d57c2293536-kube-api-access-r4tc2\") pod \"e24375bb-53a2-4ee7-992e-4d57c2293536\" (UID: \"e24375bb-53a2-4ee7-992e-4d57c2293536\") " Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.154518 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e24375bb-53a2-4ee7-992e-4d57c2293536-utilities\") pod \"e24375bb-53a2-4ee7-992e-4d57c2293536\" (UID: \"e24375bb-53a2-4ee7-992e-4d57c2293536\") " Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.154578 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e24375bb-53a2-4ee7-992e-4d57c2293536-catalog-content\") pod \"e24375bb-53a2-4ee7-992e-4d57c2293536\" (UID: \"e24375bb-53a2-4ee7-992e-4d57c2293536\") " Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.156203 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e24375bb-53a2-4ee7-992e-4d57c2293536-utilities" (OuterVolumeSpecName: "utilities") pod "e24375bb-53a2-4ee7-992e-4d57c2293536" (UID: "e24375bb-53a2-4ee7-992e-4d57c2293536"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.161414 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24375bb-53a2-4ee7-992e-4d57c2293536-kube-api-access-r4tc2" (OuterVolumeSpecName: "kube-api-access-r4tc2") pod "e24375bb-53a2-4ee7-992e-4d57c2293536" (UID: "e24375bb-53a2-4ee7-992e-4d57c2293536"). InnerVolumeSpecName "kube-api-access-r4tc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.229836 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e24375bb-53a2-4ee7-992e-4d57c2293536-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e24375bb-53a2-4ee7-992e-4d57c2293536" (UID: "e24375bb-53a2-4ee7-992e-4d57c2293536"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.256302 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4tc2\" (UniqueName: \"kubernetes.io/projected/e24375bb-53a2-4ee7-992e-4d57c2293536-kube-api-access-r4tc2\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.256381 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e24375bb-53a2-4ee7-992e-4d57c2293536-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.256400 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e24375bb-53a2-4ee7-992e-4d57c2293536-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.311252 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh"] Dec 02 13:50:35 crc kubenswrapper[4625]: W1202 13:50:35.315476 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f6ac22c_1b16_4c94_9616_4710e3b0ec50.slice/crio-ece8285ab8eab29a93ecde6f2937813f6a7d1c7ead429baadb51d13f13d99956 WatchSource:0}: Error finding container ece8285ab8eab29a93ecde6f2937813f6a7d1c7ead429baadb51d13f13d99956: Status 404 returned error can't find the container with id ece8285ab8eab29a93ecde6f2937813f6a7d1c7ead429baadb51d13f13d99956 Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.586092 4625 generic.go:334] "Generic (PLEG): container finished" podID="e24375bb-53a2-4ee7-992e-4d57c2293536" containerID="787c8bc7e50f3f0b5c35fd2ed346849f13aeec67fcb3b2bc04b6ce4c535c01e8" exitCode=0 Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.586151 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ws49j" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.586168 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws49j" event={"ID":"e24375bb-53a2-4ee7-992e-4d57c2293536","Type":"ContainerDied","Data":"787c8bc7e50f3f0b5c35fd2ed346849f13aeec67fcb3b2bc04b6ce4c535c01e8"} Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.587814 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws49j" event={"ID":"e24375bb-53a2-4ee7-992e-4d57c2293536","Type":"ContainerDied","Data":"ba41552936fd11b0bc3aab1eb915bf35e8f780f0f4fa48df35a693d2fa0c1c60"} Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.587831 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" event={"ID":"4f6ac22c-1b16-4c94-9616-4710e3b0ec50","Type":"ContainerStarted","Data":"c30c2cc3a28d10eb5d40d84b9227cde60defa010a383f3e7715c98e7c2fa6913"} Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.587842 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" event={"ID":"4f6ac22c-1b16-4c94-9616-4710e3b0ec50","Type":"ContainerStarted","Data":"ece8285ab8eab29a93ecde6f2937813f6a7d1c7ead429baadb51d13f13d99956"} Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.587864 4625 scope.go:117] "RemoveContainer" containerID="787c8bc7e50f3f0b5c35fd2ed346849f13aeec67fcb3b2bc04b6ce4c535c01e8" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.588787 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.591180 4625 patch_prober.go:28] interesting pod/route-controller-manager-c669855c-kqjjh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.591223 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" podUID="4f6ac22c-1b16-4c94-9616-4710e3b0ec50" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.600967 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" event={"ID":"ef9f9646-7f20-40d0-82f0-540de416aa5e","Type":"ContainerStarted","Data":"b527ab965cb23f05a3e7e6b47cc7e9a63601d8e9a53f05addbc24b7ec92cb690"} Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.601005 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" event={"ID":"ef9f9646-7f20-40d0-82f0-540de416aa5e","Type":"ContainerStarted","Data":"cb37af77943ee7b1b8edef93e18644920306b1427a41648c98db21be694f43da"} Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.602156 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 
13:50:35.613174 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.619170 4625 scope.go:117] "RemoveContainer" containerID="bb0b8e1beb51bb283496479f376349821cac573e387ab1261a8c6265b6a73587" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.627332 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" podStartSLOduration=1.627302415 podStartE2EDuration="1.627302415s" podCreationTimestamp="2025-12-02 13:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:50:35.624987252 +0000 UTC m=+391.587164327" watchObservedRunningTime="2025-12-02 13:50:35.627302415 +0000 UTC m=+391.589479490" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.649011 4625 scope.go:117] "RemoveContainer" containerID="4e428abbc8757011b0e2a59a6a2cc9e645dccf34f1f893445c43c747c4717fef" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.657608 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" podStartSLOduration=1.657594226 podStartE2EDuration="1.657594226s" podCreationTimestamp="2025-12-02 13:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:50:35.656159327 +0000 UTC m=+391.618336402" watchObservedRunningTime="2025-12-02 13:50:35.657594226 +0000 UTC m=+391.619771301" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.666111 4625 scope.go:117] "RemoveContainer" containerID="787c8bc7e50f3f0b5c35fd2ed346849f13aeec67fcb3b2bc04b6ce4c535c01e8" Dec 02 13:50:35 crc kubenswrapper[4625]: E1202 13:50:35.666743 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"787c8bc7e50f3f0b5c35fd2ed346849f13aeec67fcb3b2bc04b6ce4c535c01e8\": container with ID starting with 787c8bc7e50f3f0b5c35fd2ed346849f13aeec67fcb3b2bc04b6ce4c535c01e8 not found: ID does not exist" containerID="787c8bc7e50f3f0b5c35fd2ed346849f13aeec67fcb3b2bc04b6ce4c535c01e8" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.667459 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"787c8bc7e50f3f0b5c35fd2ed346849f13aeec67fcb3b2bc04b6ce4c535c01e8"} err="failed to get container status \"787c8bc7e50f3f0b5c35fd2ed346849f13aeec67fcb3b2bc04b6ce4c535c01e8\": rpc error: code = NotFound desc = could not find container \"787c8bc7e50f3f0b5c35fd2ed346849f13aeec67fcb3b2bc04b6ce4c535c01e8\": container with ID starting with 787c8bc7e50f3f0b5c35fd2ed346849f13aeec67fcb3b2bc04b6ce4c535c01e8 not found: ID does not exist" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.667501 4625 scope.go:117] "RemoveContainer" containerID="bb0b8e1beb51bb283496479f376349821cac573e387ab1261a8c6265b6a73587" Dec 02 13:50:35 crc kubenswrapper[4625]: E1202 13:50:35.667862 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb0b8e1beb51bb283496479f376349821cac573e387ab1261a8c6265b6a73587\": container with ID starting with bb0b8e1beb51bb283496479f376349821cac573e387ab1261a8c6265b6a73587 not found: ID does not exist" 
containerID="bb0b8e1beb51bb283496479f376349821cac573e387ab1261a8c6265b6a73587" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.667913 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb0b8e1beb51bb283496479f376349821cac573e387ab1261a8c6265b6a73587"} err="failed to get container status \"bb0b8e1beb51bb283496479f376349821cac573e387ab1261a8c6265b6a73587\": rpc error: code = NotFound desc = could not find container \"bb0b8e1beb51bb283496479f376349821cac573e387ab1261a8c6265b6a73587\": container with ID starting with bb0b8e1beb51bb283496479f376349821cac573e387ab1261a8c6265b6a73587 not found: ID does not exist" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.667950 4625 scope.go:117] "RemoveContainer" containerID="4e428abbc8757011b0e2a59a6a2cc9e645dccf34f1f893445c43c747c4717fef" Dec 02 13:50:35 crc kubenswrapper[4625]: E1202 13:50:35.668230 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e428abbc8757011b0e2a59a6a2cc9e645dccf34f1f893445c43c747c4717fef\": container with ID starting with 4e428abbc8757011b0e2a59a6a2cc9e645dccf34f1f893445c43c747c4717fef not found: ID does not exist" containerID="4e428abbc8757011b0e2a59a6a2cc9e645dccf34f1f893445c43c747c4717fef" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.668251 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e428abbc8757011b0e2a59a6a2cc9e645dccf34f1f893445c43c747c4717fef"} err="failed to get container status \"4e428abbc8757011b0e2a59a6a2cc9e645dccf34f1f893445c43c747c4717fef\": rpc error: code = NotFound desc = could not find container \"4e428abbc8757011b0e2a59a6a2cc9e645dccf34f1f893445c43c747c4717fef\": container with ID starting with 4e428abbc8757011b0e2a59a6a2cc9e645dccf34f1f893445c43c747c4717fef not found: ID does not exist" Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.709036 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ws49j"] Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.717387 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ws49j"] Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.727469 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5pmg6"] Dec 02 13:50:35 crc kubenswrapper[4625]: I1202 13:50:35.728182 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5pmg6" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" containerName="registry-server" containerID="cri-o://bdd9ae860a41baf1c2648459ff46a769b8fd8b71490002436786bafa821c2d36" gracePeriod=2 Dec 02 13:50:36 crc kubenswrapper[4625]: E1202 13:50:36.185976 4625 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bdd9ae860a41baf1c2648459ff46a769b8fd8b71490002436786bafa821c2d36 is running failed: container process not found" containerID="bdd9ae860a41baf1c2648459ff46a769b8fd8b71490002436786bafa821c2d36" cmd=["grpc_health_probe","-addr=:50051"] Dec 02 13:50:36 crc kubenswrapper[4625]: E1202 13:50:36.187109 4625 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bdd9ae860a41baf1c2648459ff46a769b8fd8b71490002436786bafa821c2d36 is running failed: 
container process not found" containerID="bdd9ae860a41baf1c2648459ff46a769b8fd8b71490002436786bafa821c2d36" cmd=["grpc_health_probe","-addr=:50051"] Dec 02 13:50:36 crc kubenswrapper[4625]: E1202 13:50:36.187746 4625 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bdd9ae860a41baf1c2648459ff46a769b8fd8b71490002436786bafa821c2d36 is running failed: container process not found" containerID="bdd9ae860a41baf1c2648459ff46a769b8fd8b71490002436786bafa821c2d36" cmd=["grpc_health_probe","-addr=:50051"] Dec 02 13:50:36 crc kubenswrapper[4625]: E1202 13:50:36.187784 4625 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bdd9ae860a41baf1c2648459ff46a769b8fd8b71490002436786bafa821c2d36 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-5pmg6" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" containerName="registry-server" Dec 02 13:50:36 crc kubenswrapper[4625]: I1202 13:50:36.615876 4625 generic.go:334] "Generic (PLEG): container finished" podID="36ff365b-030a-4ee4-9819-c5c41464213d" containerID="bdd9ae860a41baf1c2648459ff46a769b8fd8b71490002436786bafa821c2d36" exitCode=0 Dec 02 13:50:36 crc kubenswrapper[4625]: I1202 13:50:36.616006 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pmg6" event={"ID":"36ff365b-030a-4ee4-9819-c5c41464213d","Type":"ContainerDied","Data":"bdd9ae860a41baf1c2648459ff46a769b8fd8b71490002436786bafa821c2d36"} Dec 02 13:50:36 crc kubenswrapper[4625]: I1202 13:50:36.654627 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:50:36 crc kubenswrapper[4625]: I1202 13:50:36.791341 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:50:36 crc kubenswrapper[4625]: I1202 13:50:36.867644 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" path="/var/lib/kubelet/pods/e24375bb-53a2-4ee7-992e-4d57c2293536/volumes" Dec 02 13:50:36 crc kubenswrapper[4625]: I1202 13:50:36.879602 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36ff365b-030a-4ee4-9819-c5c41464213d-utilities\") pod \"36ff365b-030a-4ee4-9819-c5c41464213d\" (UID: \"36ff365b-030a-4ee4-9819-c5c41464213d\") " Dec 02 13:50:36 crc kubenswrapper[4625]: I1202 13:50:36.879663 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36ff365b-030a-4ee4-9819-c5c41464213d-catalog-content\") pod \"36ff365b-030a-4ee4-9819-c5c41464213d\" (UID: \"36ff365b-030a-4ee4-9819-c5c41464213d\") " Dec 02 13:50:36 crc kubenswrapper[4625]: I1202 13:50:36.879834 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmlvp\" (UniqueName: \"kubernetes.io/projected/36ff365b-030a-4ee4-9819-c5c41464213d-kube-api-access-hmlvp\") pod \"36ff365b-030a-4ee4-9819-c5c41464213d\" (UID: \"36ff365b-030a-4ee4-9819-c5c41464213d\") " Dec 02 13:50:36 crc kubenswrapper[4625]: I1202 13:50:36.880763 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36ff365b-030a-4ee4-9819-c5c41464213d-utilities" (OuterVolumeSpecName: "utilities") pod "36ff365b-030a-4ee4-9819-c5c41464213d" (UID: "36ff365b-030a-4ee4-9819-c5c41464213d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:50:36 crc kubenswrapper[4625]: I1202 13:50:36.898538 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36ff365b-030a-4ee4-9819-c5c41464213d-kube-api-access-hmlvp" (OuterVolumeSpecName: "kube-api-access-hmlvp") pod "36ff365b-030a-4ee4-9819-c5c41464213d" (UID: "36ff365b-030a-4ee4-9819-c5c41464213d"). InnerVolumeSpecName "kube-api-access-hmlvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:50:36 crc kubenswrapper[4625]: I1202 13:50:36.938651 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36ff365b-030a-4ee4-9819-c5c41464213d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36ff365b-030a-4ee4-9819-c5c41464213d" (UID: "36ff365b-030a-4ee4-9819-c5c41464213d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:50:36 crc kubenswrapper[4625]: I1202 13:50:36.981234 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmlvp\" (UniqueName: \"kubernetes.io/projected/36ff365b-030a-4ee4-9819-c5c41464213d-kube-api-access-hmlvp\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:36 crc kubenswrapper[4625]: I1202 13:50:36.982378 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36ff365b-030a-4ee4-9819-c5c41464213d-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:36 crc kubenswrapper[4625]: I1202 13:50:36.982393 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36ff365b-030a-4ee4-9819-c5c41464213d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:37 crc kubenswrapper[4625]: I1202 13:50:37.633035 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5pmg6" Dec 02 13:50:37 crc kubenswrapper[4625]: I1202 13:50:37.633408 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pmg6" event={"ID":"36ff365b-030a-4ee4-9819-c5c41464213d","Type":"ContainerDied","Data":"c8d68bb9863638a7fc24b28868886233b79a7b45e8f2b376a93ebbb35122e632"} Dec 02 13:50:37 crc kubenswrapper[4625]: I1202 13:50:37.633564 4625 scope.go:117] "RemoveContainer" containerID="bdd9ae860a41baf1c2648459ff46a769b8fd8b71490002436786bafa821c2d36" Dec 02 13:50:37 crc kubenswrapper[4625]: I1202 13:50:37.655366 4625 scope.go:117] "RemoveContainer" containerID="fa564fcb2099dcb27e38c8e170b12f47d0304b1cd480eed81f44f93db65952b4" Dec 02 13:50:37 crc kubenswrapper[4625]: I1202 13:50:37.671943 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5pmg6"] Dec 02 13:50:37 crc kubenswrapper[4625]: I1202 13:50:37.680848 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5pmg6"] Dec 02 13:50:37 crc kubenswrapper[4625]: I1202 13:50:37.685045 4625 scope.go:117] "RemoveContainer" containerID="173deed824c7d1998f912deb3c1673586ab1dfa4e9f769dc7366d0662deb459c" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.118060 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cnz8w"] Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.118485 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cnz8w" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" containerName="registry-server" containerID="cri-o://1f9b42c3d63b115c4aec053f1dfebbced0cf80367a130a2e10be2722b70cb2cd" gracePeriod=2 Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.509244 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.609056 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ndbc\" (UniqueName: \"kubernetes.io/projected/b3dd5657-6642-43f3-922f-37dea47fe07a-kube-api-access-8ndbc\") pod \"b3dd5657-6642-43f3-922f-37dea47fe07a\" (UID: \"b3dd5657-6642-43f3-922f-37dea47fe07a\") " Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.609116 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3dd5657-6642-43f3-922f-37dea47fe07a-utilities\") pod \"b3dd5657-6642-43f3-922f-37dea47fe07a\" (UID: \"b3dd5657-6642-43f3-922f-37dea47fe07a\") " Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.609240 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3dd5657-6642-43f3-922f-37dea47fe07a-catalog-content\") pod \"b3dd5657-6642-43f3-922f-37dea47fe07a\" (UID: \"b3dd5657-6642-43f3-922f-37dea47fe07a\") " Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.610221 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3dd5657-6642-43f3-922f-37dea47fe07a-utilities" (OuterVolumeSpecName: "utilities") pod "b3dd5657-6642-43f3-922f-37dea47fe07a" (UID: "b3dd5657-6642-43f3-922f-37dea47fe07a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.617824 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3dd5657-6642-43f3-922f-37dea47fe07a-kube-api-access-8ndbc" (OuterVolumeSpecName: "kube-api-access-8ndbc") pod "b3dd5657-6642-43f3-922f-37dea47fe07a" (UID: "b3dd5657-6642-43f3-922f-37dea47fe07a"). InnerVolumeSpecName "kube-api-access-8ndbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.643094 4625 generic.go:334] "Generic (PLEG): container finished" podID="b3dd5657-6642-43f3-922f-37dea47fe07a" containerID="1f9b42c3d63b115c4aec053f1dfebbced0cf80367a130a2e10be2722b70cb2cd" exitCode=0 Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.643173 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnz8w" event={"ID":"b3dd5657-6642-43f3-922f-37dea47fe07a","Type":"ContainerDied","Data":"1f9b42c3d63b115c4aec053f1dfebbced0cf80367a130a2e10be2722b70cb2cd"} Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.643227 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnz8w" event={"ID":"b3dd5657-6642-43f3-922f-37dea47fe07a","Type":"ContainerDied","Data":"2629cfac8c067738f03e9db486e8af88df903505b7ca49a579b7b23271dedfe5"} Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.644347 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cnz8w" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.645436 4625 scope.go:117] "RemoveContainer" containerID="1f9b42c3d63b115c4aec053f1dfebbced0cf80367a130a2e10be2722b70cb2cd" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.672962 4625 scope.go:117] "RemoveContainer" containerID="4fc3b4103ecec4ebbd02a7feb4fff6afd97c529a9e44346425366e698920d94a" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.703628 4625 scope.go:117] "RemoveContainer" containerID="12ccf97615e677093a387153128cdf81b7c3a1dd9323bb64144f9556f38b753e" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.711389 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ndbc\" (UniqueName: \"kubernetes.io/projected/b3dd5657-6642-43f3-922f-37dea47fe07a-kube-api-access-8ndbc\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.711423 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3dd5657-6642-43f3-922f-37dea47fe07a-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.723111 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3dd5657-6642-43f3-922f-37dea47fe07a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3dd5657-6642-43f3-922f-37dea47fe07a" (UID: "b3dd5657-6642-43f3-922f-37dea47fe07a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.736002 4625 scope.go:117] "RemoveContainer" containerID="1f9b42c3d63b115c4aec053f1dfebbced0cf80367a130a2e10be2722b70cb2cd" Dec 02 13:50:38 crc kubenswrapper[4625]: E1202 13:50:38.736960 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f9b42c3d63b115c4aec053f1dfebbced0cf80367a130a2e10be2722b70cb2cd\": container with ID starting with 1f9b42c3d63b115c4aec053f1dfebbced0cf80367a130a2e10be2722b70cb2cd not found: ID does not exist" containerID="1f9b42c3d63b115c4aec053f1dfebbced0cf80367a130a2e10be2722b70cb2cd" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.737035 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f9b42c3d63b115c4aec053f1dfebbced0cf80367a130a2e10be2722b70cb2cd"} err="failed to get container status \"1f9b42c3d63b115c4aec053f1dfebbced0cf80367a130a2e10be2722b70cb2cd\": rpc error: code = NotFound desc = could not find container \"1f9b42c3d63b115c4aec053f1dfebbced0cf80367a130a2e10be2722b70cb2cd\": container with ID starting with 1f9b42c3d63b115c4aec053f1dfebbced0cf80367a130a2e10be2722b70cb2cd not found: ID does not exist" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.737098 4625 scope.go:117] "RemoveContainer" containerID="4fc3b4103ecec4ebbd02a7feb4fff6afd97c529a9e44346425366e698920d94a" Dec 02 13:50:38 crc kubenswrapper[4625]: E1202 13:50:38.737498 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc3b4103ecec4ebbd02a7feb4fff6afd97c529a9e44346425366e698920d94a\": container with ID starting with 4fc3b4103ecec4ebbd02a7feb4fff6afd97c529a9e44346425366e698920d94a not found: ID does not exist" containerID="4fc3b4103ecec4ebbd02a7feb4fff6afd97c529a9e44346425366e698920d94a" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.737524 4625 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc3b4103ecec4ebbd02a7feb4fff6afd97c529a9e44346425366e698920d94a"} err="failed to get container status \"4fc3b4103ecec4ebbd02a7feb4fff6afd97c529a9e44346425366e698920d94a\": rpc error: code = NotFound desc = could not find container \"4fc3b4103ecec4ebbd02a7feb4fff6afd97c529a9e44346425366e698920d94a\": container with ID starting with 4fc3b4103ecec4ebbd02a7feb4fff6afd97c529a9e44346425366e698920d94a not found: ID does not exist" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.737540 4625 scope.go:117] "RemoveContainer" containerID="12ccf97615e677093a387153128cdf81b7c3a1dd9323bb64144f9556f38b753e" Dec 02 13:50:38 crc kubenswrapper[4625]: E1202 13:50:38.737767 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12ccf97615e677093a387153128cdf81b7c3a1dd9323bb64144f9556f38b753e\": container with ID starting with 12ccf97615e677093a387153128cdf81b7c3a1dd9323bb64144f9556f38b753e not found: ID does not exist" containerID="12ccf97615e677093a387153128cdf81b7c3a1dd9323bb64144f9556f38b753e" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.737792 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12ccf97615e677093a387153128cdf81b7c3a1dd9323bb64144f9556f38b753e"} err="failed to get container status \"12ccf97615e677093a387153128cdf81b7c3a1dd9323bb64144f9556f38b753e\": rpc error: code = NotFound desc = could not find container \"12ccf97615e677093a387153128cdf81b7c3a1dd9323bb64144f9556f38b753e\": container with ID starting with 12ccf97615e677093a387153128cdf81b7c3a1dd9323bb64144f9556f38b753e not found: ID does not exist" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.813548 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3dd5657-6642-43f3-922f-37dea47fe07a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.865366 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" path="/var/lib/kubelet/pods/36ff365b-030a-4ee4-9819-c5c41464213d/volumes" Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.968086 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cnz8w"] Dec 02 13:50:38 crc kubenswrapper[4625]: I1202 13:50:38.976062 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cnz8w"] Dec 02 13:50:40 crc kubenswrapper[4625]: I1202 13:50:40.865715 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" path="/var/lib/kubelet/pods/b3dd5657-6642-43f3-922f-37dea47fe07a/volumes" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.382122 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t9kbs"] Dec 02 13:50:45 crc kubenswrapper[4625]: E1202 13:50:45.383940 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" containerName="extract-utilities" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.383984 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" containerName="extract-utilities" Dec 02 13:50:45 crc kubenswrapper[4625]: E1202 13:50:45.384004 4625 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" containerName="extract-utilities" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.384013 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" containerName="extract-utilities" Dec 02 13:50:45 crc kubenswrapper[4625]: E1202 13:50:45.384038 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" containerName="registry-server" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.384048 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" containerName="registry-server" Dec 02 13:50:45 crc kubenswrapper[4625]: E1202 13:50:45.384072 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" containerName="extract-content" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.384081 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" containerName="extract-content" Dec 02 13:50:45 crc kubenswrapper[4625]: E1202 13:50:45.384092 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" containerName="extract-utilities" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.384101 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" containerName="extract-utilities" Dec 02 13:50:45 crc kubenswrapper[4625]: E1202 13:50:45.384122 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" containerName="extract-content" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.384131 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" containerName="extract-content" Dec 02 13:50:45 crc kubenswrapper[4625]: E1202 13:50:45.384181 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" containerName="registry-server" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.384190 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" containerName="registry-server" Dec 02 13:50:45 crc kubenswrapper[4625]: E1202 13:50:45.384212 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" containerName="extract-content" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.384221 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" containerName="extract-content" Dec 02 13:50:45 crc kubenswrapper[4625]: E1202 13:50:45.384242 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" containerName="registry-server" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.384250 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" containerName="registry-server" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.384616 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24375bb-53a2-4ee7-992e-4d57c2293536" containerName="registry-server" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.384718 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="36ff365b-030a-4ee4-9819-c5c41464213d" containerName="registry-server" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 
13:50:45.384746 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3dd5657-6642-43f3-922f-37dea47fe07a" containerName="registry-server" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.387831 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.430991 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t9kbs"] Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.514263 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.514358 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.514391 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-bound-sa-token\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.514493 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-trusted-ca\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.514517 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-registry-certificates\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.514548 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.514567 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-registry-tls\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.514583 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2rwg\" (UniqueName: \"kubernetes.io/projected/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-kube-api-access-w2rwg\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.545814 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.615971 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.616054 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-bound-sa-token\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.616164 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-trusted-ca\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.616196 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-registry-certificates\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.616234 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.616266 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-registry-tls\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.616288 4625 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w2rwg\" (UniqueName: \"kubernetes.io/projected/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-kube-api-access-w2rwg\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.616542 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.618058 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-trusted-ca\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.618113 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-registry-certificates\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.626218 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.626272 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-registry-tls\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.638010 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-bound-sa-token\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.643367 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2rwg\" (UniqueName: \"kubernetes.io/projected/bbeae6fd-fc99-4981-8b4b-e26d59865b6d-kube-api-access-w2rwg\") pod \"image-registry-66df7c8f76-t9kbs\" (UID: \"bbeae6fd-fc99-4981-8b4b-e26d59865b6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:45 crc kubenswrapper[4625]: I1202 13:50:45.732030 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:46 crc kubenswrapper[4625]: I1202 13:50:46.230661 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t9kbs"] Dec 02 13:50:46 crc kubenswrapper[4625]: I1202 13:50:46.704409 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" event={"ID":"bbeae6fd-fc99-4981-8b4b-e26d59865b6d","Type":"ContainerStarted","Data":"d632d3b7dcd3a92e73b6f038605eedd64ab834122369c5cac68c8db376b3288a"} Dec 02 13:50:46 crc kubenswrapper[4625]: I1202 13:50:46.704482 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" event={"ID":"bbeae6fd-fc99-4981-8b4b-e26d59865b6d","Type":"ContainerStarted","Data":"e3b969218a974fe1a7cfeed38fbbb5aa749f4a80cd22a085e142e7f93906b378"} Dec 02 13:50:46 crc kubenswrapper[4625]: I1202 13:50:46.704590 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:50:46 crc kubenswrapper[4625]: I1202 13:50:46.722187 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" podStartSLOduration=1.7221663980000002 podStartE2EDuration="1.722166398s" podCreationTimestamp="2025-12-02 13:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:50:46.720510843 +0000 UTC m=+402.682687928" watchObservedRunningTime="2025-12-02 13:50:46.722166398 +0000 UTC m=+402.684343463" Dec 02 13:50:49 crc kubenswrapper[4625]: I1202 13:50:49.271547 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:50:49 crc kubenswrapper[4625]: I1202 13:50:49.273048 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:50:53 crc kubenswrapper[4625]: I1202 13:50:53.417050 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cc687dd9c-k669l"] Dec 02 13:50:53 crc kubenswrapper[4625]: I1202 13:50:53.417656 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" podUID="ef9f9646-7f20-40d0-82f0-540de416aa5e" containerName="controller-manager" containerID="cri-o://b527ab965cb23f05a3e7e6b47cc7e9a63601d8e9a53f05addbc24b7ec92cb690" gracePeriod=30 Dec 02 13:50:53 crc kubenswrapper[4625]: I1202 13:50:53.764223 4625 generic.go:334] "Generic (PLEG): container finished" podID="ef9f9646-7f20-40d0-82f0-540de416aa5e" containerID="b527ab965cb23f05a3e7e6b47cc7e9a63601d8e9a53f05addbc24b7ec92cb690" exitCode=0 Dec 02 13:50:53 crc kubenswrapper[4625]: I1202 13:50:53.764283 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" 
event={"ID":"ef9f9646-7f20-40d0-82f0-540de416aa5e","Type":"ContainerDied","Data":"b527ab965cb23f05a3e7e6b47cc7e9a63601d8e9a53f05addbc24b7ec92cb690"} Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.000884 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.088527 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-config\") pod \"ef9f9646-7f20-40d0-82f0-540de416aa5e\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.089175 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-client-ca\") pod \"ef9f9646-7f20-40d0-82f0-540de416aa5e\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.089218 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9f9646-7f20-40d0-82f0-540de416aa5e-serving-cert\") pod \"ef9f9646-7f20-40d0-82f0-540de416aa5e\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.089282 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-proxy-ca-bundles\") pod \"ef9f9646-7f20-40d0-82f0-540de416aa5e\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.089350 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x58cl\" (UniqueName: \"kubernetes.io/projected/ef9f9646-7f20-40d0-82f0-540de416aa5e-kube-api-access-x58cl\") pod \"ef9f9646-7f20-40d0-82f0-540de416aa5e\" (UID: \"ef9f9646-7f20-40d0-82f0-540de416aa5e\") " Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.089657 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-config" (OuterVolumeSpecName: "config") pod "ef9f9646-7f20-40d0-82f0-540de416aa5e" (UID: "ef9f9646-7f20-40d0-82f0-540de416aa5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.089778 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.089802 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-client-ca" (OuterVolumeSpecName: "client-ca") pod "ef9f9646-7f20-40d0-82f0-540de416aa5e" (UID: "ef9f9646-7f20-40d0-82f0-540de416aa5e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.089904 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ef9f9646-7f20-40d0-82f0-540de416aa5e" (UID: "ef9f9646-7f20-40d0-82f0-540de416aa5e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.095423 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef9f9646-7f20-40d0-82f0-540de416aa5e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ef9f9646-7f20-40d0-82f0-540de416aa5e" (UID: "ef9f9646-7f20-40d0-82f0-540de416aa5e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.096491 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef9f9646-7f20-40d0-82f0-540de416aa5e-kube-api-access-x58cl" (OuterVolumeSpecName: "kube-api-access-x58cl") pod "ef9f9646-7f20-40d0-82f0-540de416aa5e" (UID: "ef9f9646-7f20-40d0-82f0-540de416aa5e"). InnerVolumeSpecName "kube-api-access-x58cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.191174 4625 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.191232 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9f9646-7f20-40d0-82f0-540de416aa5e-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.191243 4625 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef9f9646-7f20-40d0-82f0-540de416aa5e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.191258 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x58cl\" (UniqueName: \"kubernetes.io/projected/ef9f9646-7f20-40d0-82f0-540de416aa5e-kube-api-access-x58cl\") on node \"crc\" DevicePath \"\"" Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.771328 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" event={"ID":"ef9f9646-7f20-40d0-82f0-540de416aa5e","Type":"ContainerDied","Data":"cb37af77943ee7b1b8edef93e18644920306b1427a41648c98db21be694f43da"} Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.771383 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc687dd9c-k669l" Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.771389 4625 scope.go:117] "RemoveContainer" containerID="b527ab965cb23f05a3e7e6b47cc7e9a63601d8e9a53f05addbc24b7ec92cb690" Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.817821 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cc687dd9c-k669l"] Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.824701 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cc687dd9c-k669l"] Dec 02 13:50:54 crc kubenswrapper[4625]: I1202 13:50:54.862805 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef9f9646-7f20-40d0-82f0-540de416aa5e" path="/var/lib/kubelet/pods/ef9f9646-7f20-40d0-82f0-540de416aa5e/volumes" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.029540 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4"] Dec 02 13:50:55 crc kubenswrapper[4625]: E1202 13:50:55.030230 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef9f9646-7f20-40d0-82f0-540de416aa5e" containerName="controller-manager" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.030335 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef9f9646-7f20-40d0-82f0-540de416aa5e" containerName="controller-manager" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.030547 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef9f9646-7f20-40d0-82f0-540de416aa5e" containerName="controller-manager" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.031142 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.036197 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.036709 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.037141 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.037269 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.037563 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.038038 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.042447 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.048285 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4"] Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.106594 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e644f98-85aa-44d9-9e40-6e461b56734e-proxy-ca-bundles\") pod \"controller-manager-6f4458dcb4-wgpg4\" (UID: \"7e644f98-85aa-44d9-9e40-6e461b56734e\") " pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.106904 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzp2w\" (UniqueName: \"kubernetes.io/projected/7e644f98-85aa-44d9-9e40-6e461b56734e-kube-api-access-zzp2w\") pod \"controller-manager-6f4458dcb4-wgpg4\" (UID: \"7e644f98-85aa-44d9-9e40-6e461b56734e\") " pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.107021 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e644f98-85aa-44d9-9e40-6e461b56734e-client-ca\") pod \"controller-manager-6f4458dcb4-wgpg4\" (UID: \"7e644f98-85aa-44d9-9e40-6e461b56734e\") " pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.107111 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e644f98-85aa-44d9-9e40-6e461b56734e-serving-cert\") pod \"controller-manager-6f4458dcb4-wgpg4\" (UID: \"7e644f98-85aa-44d9-9e40-6e461b56734e\") " pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.107196 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7e644f98-85aa-44d9-9e40-6e461b56734e-config\") pod \"controller-manager-6f4458dcb4-wgpg4\" (UID: \"7e644f98-85aa-44d9-9e40-6e461b56734e\") " pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.208101 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e644f98-85aa-44d9-9e40-6e461b56734e-proxy-ca-bundles\") pod \"controller-manager-6f4458dcb4-wgpg4\" (UID: \"7e644f98-85aa-44d9-9e40-6e461b56734e\") " pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.208168 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzp2w\" (UniqueName: \"kubernetes.io/projected/7e644f98-85aa-44d9-9e40-6e461b56734e-kube-api-access-zzp2w\") pod \"controller-manager-6f4458dcb4-wgpg4\" (UID: \"7e644f98-85aa-44d9-9e40-6e461b56734e\") " pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.208246 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e644f98-85aa-44d9-9e40-6e461b56734e-client-ca\") pod \"controller-manager-6f4458dcb4-wgpg4\" (UID: \"7e644f98-85aa-44d9-9e40-6e461b56734e\") " pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.208274 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e644f98-85aa-44d9-9e40-6e461b56734e-serving-cert\") pod \"controller-manager-6f4458dcb4-wgpg4\" (UID: \"7e644f98-85aa-44d9-9e40-6e461b56734e\") " pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.208300 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e644f98-85aa-44d9-9e40-6e461b56734e-config\") pod \"controller-manager-6f4458dcb4-wgpg4\" (UID: \"7e644f98-85aa-44d9-9e40-6e461b56734e\") " pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.209380 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e644f98-85aa-44d9-9e40-6e461b56734e-client-ca\") pod \"controller-manager-6f4458dcb4-wgpg4\" (UID: \"7e644f98-85aa-44d9-9e40-6e461b56734e\") " pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.209865 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e644f98-85aa-44d9-9e40-6e461b56734e-config\") pod \"controller-manager-6f4458dcb4-wgpg4\" (UID: \"7e644f98-85aa-44d9-9e40-6e461b56734e\") " pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.210393 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e644f98-85aa-44d9-9e40-6e461b56734e-proxy-ca-bundles\") pod \"controller-manager-6f4458dcb4-wgpg4\" (UID: \"7e644f98-85aa-44d9-9e40-6e461b56734e\") " pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" 
Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.213479 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e644f98-85aa-44d9-9e40-6e461b56734e-serving-cert\") pod \"controller-manager-6f4458dcb4-wgpg4\" (UID: \"7e644f98-85aa-44d9-9e40-6e461b56734e\") " pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.231239 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzp2w\" (UniqueName: \"kubernetes.io/projected/7e644f98-85aa-44d9-9e40-6e461b56734e-kube-api-access-zzp2w\") pod \"controller-manager-6f4458dcb4-wgpg4\" (UID: \"7e644f98-85aa-44d9-9e40-6e461b56734e\") " pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.348950 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:55 crc kubenswrapper[4625]: I1202 13:50:55.759744 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4"] Dec 02 13:50:55 crc kubenswrapper[4625]: W1202 13:50:55.771570 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e644f98_85aa_44d9_9e40_6e461b56734e.slice/crio-4a78d0e4d923003c0c6f3b30c519c2e61c0d829e7124195dd430fdd13dd375ac WatchSource:0}: Error finding container 4a78d0e4d923003c0c6f3b30c519c2e61c0d829e7124195dd430fdd13dd375ac: Status 404 returned error can't find the container with id 4a78d0e4d923003c0c6f3b30c519c2e61c0d829e7124195dd430fdd13dd375ac Dec 02 13:50:56 crc kubenswrapper[4625]: I1202 13:50:56.786103 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" event={"ID":"7e644f98-85aa-44d9-9e40-6e461b56734e","Type":"ContainerStarted","Data":"0c1fa5818fc9c35f12c172382d79a97597afa5b014b2cd250a2e060f43b0a13c"} Dec 02 13:50:56 crc kubenswrapper[4625]: I1202 13:50:56.786442 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" event={"ID":"7e644f98-85aa-44d9-9e40-6e461b56734e","Type":"ContainerStarted","Data":"4a78d0e4d923003c0c6f3b30c519c2e61c0d829e7124195dd430fdd13dd375ac"} Dec 02 13:50:56 crc kubenswrapper[4625]: I1202 13:50:56.786465 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:56 crc kubenswrapper[4625]: I1202 13:50:56.792449 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" Dec 02 13:50:56 crc kubenswrapper[4625]: I1202 13:50:56.807575 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f4458dcb4-wgpg4" podStartSLOduration=3.807555368 podStartE2EDuration="3.807555368s" podCreationTimestamp="2025-12-02 13:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:50:56.802877011 +0000 UTC m=+412.765054086" watchObservedRunningTime="2025-12-02 13:50:56.807555368 +0000 UTC m=+412.769732443" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.062962 4625 kubelet.go:2437] 
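The ContainerStarted events above (and the ContainerDied events before them) come from the pod lifecycle event generator, which the SyncLoop entries tag as "(PLEG)": the kubelet periodically relists container states from the runtime and diffs each snapshot against the previous one, turning state transitions into events for the sync loop. The podStartSLOduration arithmetic is also visible here: creation at 13:50:53 to the watch-observed running time at 13:50:56.807 is the reported 3.807s. A toy version of the relist diff, assuming a simplified running/exited state model (the real implementation lives in the kubelet's pkg/kubelet/pleg):

    package main

    import "fmt"

    // diffStates emits PLEG-style events for containers whose state changed
    // between two relist snapshots, keyed by container ID.
    func diffStates(prev, curr map[string]string) []string {
        var events []string
        for id, state := range curr {
            if prev[id] == state {
                continue
            }
            switch state {
            case "running":
                events = append(events, "ContainerStarted "+id)
            case "exited":
                events = append(events, "ContainerDied "+id)
            }
        }
        return events
    }

    func main() {
        prev := map[string]string{"0c1fa5818fc9": "created"}
        curr := map[string]string{"0c1fa5818fc9": "running"}
        fmt.Println(diffStates(prev, curr)) // [ContainerStarted 0c1fa5818fc9]
    }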
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ds7zw"] Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.069937 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ds7zw" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" containerName="registry-server" containerID="cri-o://b6ecacc018ddbe7bc6af5a2858a3b5032672da150a18d969c69cd6535c3d1c5c" gracePeriod=30 Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.074476 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r84nn"] Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.074828 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r84nn" podUID="039b4452-411a-43c5-9823-860c079e5de3" containerName="registry-server" containerID="cri-o://b86c9c281c3df1ce101290e3cd66601b45d7f07b5f9479f745210e3b0fa82ffb" gracePeriod=30 Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.085237 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hm5k5"] Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.085487 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" podUID="4065d249-ffb1-406a-9e88-b6b97cf70f2a" containerName="marketplace-operator" containerID="cri-o://a8b5b7690802899bbf7f6a18be22c374122e8f664e0ae4e4911c46cf73ed43f2" gracePeriod=30 Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.099429 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-77pll"] Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.099811 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-77pll" podUID="a736316a-06cf-4768-bb70-f5c9ed61de8f" containerName="registry-server" containerID="cri-o://0388556eb45ea4134affbb114ce7b1e8d52eb08118e613079cbdcadb68cd1afc" gracePeriod=30 Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.117857 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zwb28"] Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.118202 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zwb28" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" containerName="registry-server" containerID="cri-o://f719411005de7a9353f80da4d4f62c2862aa343d49d832879ff14745e96d178b" gracePeriod=30 Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.131439 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lm5sj"] Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.133024 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lm5sj" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.201949 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lm5sj"] Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.253770 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/82f63ecf-aa95-429e-a39a-796125dfa29c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lm5sj\" (UID: \"82f63ecf-aa95-429e-a39a-796125dfa29c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm5sj" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.253860 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82f63ecf-aa95-429e-a39a-796125dfa29c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lm5sj\" (UID: \"82f63ecf-aa95-429e-a39a-796125dfa29c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm5sj" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.253998 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsbwd\" (UniqueName: \"kubernetes.io/projected/82f63ecf-aa95-429e-a39a-796125dfa29c-kube-api-access-wsbwd\") pod \"marketplace-operator-79b997595-lm5sj\" (UID: \"82f63ecf-aa95-429e-a39a-796125dfa29c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm5sj" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.356183 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/82f63ecf-aa95-429e-a39a-796125dfa29c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lm5sj\" (UID: \"82f63ecf-aa95-429e-a39a-796125dfa29c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm5sj" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.356224 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82f63ecf-aa95-429e-a39a-796125dfa29c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lm5sj\" (UID: \"82f63ecf-aa95-429e-a39a-796125dfa29c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm5sj" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.361524 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsbwd\" (UniqueName: \"kubernetes.io/projected/82f63ecf-aa95-429e-a39a-796125dfa29c-kube-api-access-wsbwd\") pod \"marketplace-operator-79b997595-lm5sj\" (UID: \"82f63ecf-aa95-429e-a39a-796125dfa29c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm5sj" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.383623 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82f63ecf-aa95-429e-a39a-796125dfa29c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lm5sj\" (UID: \"82f63ecf-aa95-429e-a39a-796125dfa29c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm5sj" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.389475 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/82f63ecf-aa95-429e-a39a-796125dfa29c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lm5sj\" (UID: \"82f63ecf-aa95-429e-a39a-796125dfa29c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm5sj" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.395959 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsbwd\" (UniqueName: \"kubernetes.io/projected/82f63ecf-aa95-429e-a39a-796125dfa29c-kube-api-access-wsbwd\") pod \"marketplace-operator-79b997595-lm5sj\" (UID: \"82f63ecf-aa95-429e-a39a-796125dfa29c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lm5sj" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.469092 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lm5sj" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.757660 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.872199 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/039b4452-411a-43c5-9823-860c079e5de3-utilities\") pod \"039b4452-411a-43c5-9823-860c079e5de3\" (UID: \"039b4452-411a-43c5-9823-860c079e5de3\") " Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.872326 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npdlf\" (UniqueName: \"kubernetes.io/projected/039b4452-411a-43c5-9823-860c079e5de3-kube-api-access-npdlf\") pod \"039b4452-411a-43c5-9823-860c079e5de3\" (UID: \"039b4452-411a-43c5-9823-860c079e5de3\") " Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.872354 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/039b4452-411a-43c5-9823-860c079e5de3-catalog-content\") pod \"039b4452-411a-43c5-9823-860c079e5de3\" (UID: \"039b4452-411a-43c5-9823-860c079e5de3\") " Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.873675 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/039b4452-411a-43c5-9823-860c079e5de3-utilities" (OuterVolumeSpecName: "utilities") pod "039b4452-411a-43c5-9823-860c079e5de3" (UID: "039b4452-411a-43c5-9823-860c079e5de3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.878675 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/039b4452-411a-43c5-9823-860c079e5de3-kube-api-access-npdlf" (OuterVolumeSpecName: "kube-api-access-npdlf") pod "039b4452-411a-43c5-9823-860c079e5de3" (UID: "039b4452-411a-43c5-9823-860c079e5de3"). InnerVolumeSpecName "kube-api-access-npdlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.954167 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/039b4452-411a-43c5-9823-860c079e5de3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "039b4452-411a-43c5-9823-860c079e5de3" (UID: "039b4452-411a-43c5-9823-860c079e5de3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.973762 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/039b4452-411a-43c5-9823-860c079e5de3-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.973799 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npdlf\" (UniqueName: \"kubernetes.io/projected/039b4452-411a-43c5-9823-860c079e5de3-kube-api-access-npdlf\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.973811 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/039b4452-411a-43c5-9823-860c079e5de3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.981632 4625 generic.go:334] "Generic (PLEG): container finished" podID="039b4452-411a-43c5-9823-860c079e5de3" containerID="b86c9c281c3df1ce101290e3cd66601b45d7f07b5f9479f745210e3b0fa82ffb" exitCode=0 Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.981771 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r84nn" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.981752 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r84nn" event={"ID":"039b4452-411a-43c5-9823-860c079e5de3","Type":"ContainerDied","Data":"b86c9c281c3df1ce101290e3cd66601b45d7f07b5f9479f745210e3b0fa82ffb"} Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.981956 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r84nn" event={"ID":"039b4452-411a-43c5-9823-860c079e5de3","Type":"ContainerDied","Data":"bab18a710c725db189e22c94298d400fae2ca585fd3a21b21286928031fa796c"} Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.981996 4625 scope.go:117] "RemoveContainer" containerID="b86c9c281c3df1ce101290e3cd66601b45d7f07b5f9479f745210e3b0fa82ffb" Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.986659 4625 generic.go:334] "Generic (PLEG): container finished" podID="4d302da4-c96b-4efd-be3e-104812b4adfa" containerID="f719411005de7a9353f80da4d4f62c2862aa343d49d832879ff14745e96d178b" exitCode=0 Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.986761 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwb28" event={"ID":"4d302da4-c96b-4efd-be3e-104812b4adfa","Type":"ContainerDied","Data":"f719411005de7a9353f80da4d4f62c2862aa343d49d832879ff14745e96d178b"} Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.992173 4625 generic.go:334] "Generic (PLEG): container finished" podID="4065d249-ffb1-406a-9e88-b6b97cf70f2a" containerID="a8b5b7690802899bbf7f6a18be22c374122e8f664e0ae4e4911c46cf73ed43f2" exitCode=0 Dec 02 13:51:03 crc kubenswrapper[4625]: I1202 13:51:03.992262 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" event={"ID":"4065d249-ffb1-406a-9e88-b6b97cf70f2a","Type":"ContainerDied","Data":"a8b5b7690802899bbf7f6a18be22c374122e8f664e0ae4e4911c46cf73ed43f2"} Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.000857 4625 generic.go:334] "Generic (PLEG): container finished" podID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" containerID="b6ecacc018ddbe7bc6af5a2858a3b5032672da150a18d969c69cd6535c3d1c5c" 
exitCode=0 Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.000938 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ds7zw" event={"ID":"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7","Type":"ContainerDied","Data":"b6ecacc018ddbe7bc6af5a2858a3b5032672da150a18d969c69cd6535c3d1c5c"} Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.019178 4625 scope.go:117] "RemoveContainer" containerID="1243a7e1434966798e2c63369e586fa7511771c5f3880ba200e37025c2cbb770" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.027589 4625 generic.go:334] "Generic (PLEG): container finished" podID="a736316a-06cf-4768-bb70-f5c9ed61de8f" containerID="0388556eb45ea4134affbb114ce7b1e8d52eb08118e613079cbdcadb68cd1afc" exitCode=0 Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.027752 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77pll" event={"ID":"a736316a-06cf-4768-bb70-f5c9ed61de8f","Type":"ContainerDied","Data":"0388556eb45ea4134affbb114ce7b1e8d52eb08118e613079cbdcadb68cd1afc"} Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.048104 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r84nn"] Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.055047 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r84nn"] Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.077823 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.080043 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.080466 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.088653 4625 scope.go:117] "RemoveContainer" containerID="92de1b6a3e4c7f50074a9abf0d550d6d97d888bac452bfa974817f8180d20cd5" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.117122 4625 scope.go:117] "RemoveContainer" containerID="b86c9c281c3df1ce101290e3cd66601b45d7f07b5f9479f745210e3b0fa82ffb" Dec 02 13:51:04 crc kubenswrapper[4625]: E1202 13:51:04.118058 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b86c9c281c3df1ce101290e3cd66601b45d7f07b5f9479f745210e3b0fa82ffb\": container with ID starting with b86c9c281c3df1ce101290e3cd66601b45d7f07b5f9479f745210e3b0fa82ffb not found: ID does not exist" containerID="b86c9c281c3df1ce101290e3cd66601b45d7f07b5f9479f745210e3b0fa82ffb" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.118093 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86c9c281c3df1ce101290e3cd66601b45d7f07b5f9479f745210e3b0fa82ffb"} err="failed to get container status \"b86c9c281c3df1ce101290e3cd66601b45d7f07b5f9479f745210e3b0fa82ffb\": rpc error: code = NotFound desc = could not find container \"b86c9c281c3df1ce101290e3cd66601b45d7f07b5f9479f745210e3b0fa82ffb\": container with ID starting with b86c9c281c3df1ce101290e3cd66601b45d7f07b5f9479f745210e3b0fa82ffb not found: ID does not exist" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.118124 4625 scope.go:117] "RemoveContainer" containerID="1243a7e1434966798e2c63369e586fa7511771c5f3880ba200e37025c2cbb770" Dec 02 13:51:04 crc kubenswrapper[4625]: E1202 13:51:04.123685 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1243a7e1434966798e2c63369e586fa7511771c5f3880ba200e37025c2cbb770\": container with ID starting with 1243a7e1434966798e2c63369e586fa7511771c5f3880ba200e37025c2cbb770 not found: ID does not exist" containerID="1243a7e1434966798e2c63369e586fa7511771c5f3880ba200e37025c2cbb770" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.123721 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1243a7e1434966798e2c63369e586fa7511771c5f3880ba200e37025c2cbb770"} err="failed to get container status \"1243a7e1434966798e2c63369e586fa7511771c5f3880ba200e37025c2cbb770\": rpc error: code = NotFound desc = could not find container \"1243a7e1434966798e2c63369e586fa7511771c5f3880ba200e37025c2cbb770\": container with ID starting with 1243a7e1434966798e2c63369e586fa7511771c5f3880ba200e37025c2cbb770 not found: ID does not exist" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.123751 4625 scope.go:117] "RemoveContainer" containerID="92de1b6a3e4c7f50074a9abf0d550d6d97d888bac452bfa974817f8180d20cd5" Dec 02 13:51:04 crc kubenswrapper[4625]: E1202 13:51:04.144706 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92de1b6a3e4c7f50074a9abf0d550d6d97d888bac452bfa974817f8180d20cd5\": container with ID starting with 92de1b6a3e4c7f50074a9abf0d550d6d97d888bac452bfa974817f8180d20cd5 not found: ID does not exist" containerID="92de1b6a3e4c7f50074a9abf0d550d6d97d888bac452bfa974817f8180d20cd5" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.144768 4625 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"92de1b6a3e4c7f50074a9abf0d550d6d97d888bac452bfa974817f8180d20cd5"} err="failed to get container status \"92de1b6a3e4c7f50074a9abf0d550d6d97d888bac452bfa974817f8180d20cd5\": rpc error: code = NotFound desc = could not find container \"92de1b6a3e4c7f50074a9abf0d550d6d97d888bac452bfa974817f8180d20cd5\": container with ID starting with 92de1b6a3e4c7f50074a9abf0d550d6d97d888bac452bfa974817f8180d20cd5 not found: ID does not exist" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.144798 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.144806 4625 scope.go:117] "RemoveContainer" containerID="1f06189811257927c555243199630b1596c6d73dcd32d73fefe67438d2d3faaf" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.179843 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77bq8\" (UniqueName: \"kubernetes.io/projected/4d302da4-c96b-4efd-be3e-104812b4adfa-kube-api-access-77bq8\") pod \"4d302da4-c96b-4efd-be3e-104812b4adfa\" (UID: \"4d302da4-c96b-4efd-be3e-104812b4adfa\") " Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.179972 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a736316a-06cf-4768-bb70-f5c9ed61de8f-utilities\") pod \"a736316a-06cf-4768-bb70-f5c9ed61de8f\" (UID: \"a736316a-06cf-4768-bb70-f5c9ed61de8f\") " Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.180020 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqzw5\" (UniqueName: \"kubernetes.io/projected/a736316a-06cf-4768-bb70-f5c9ed61de8f-kube-api-access-lqzw5\") pod \"a736316a-06cf-4768-bb70-f5c9ed61de8f\" (UID: \"a736316a-06cf-4768-bb70-f5c9ed61de8f\") " Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.180048 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-catalog-content\") pod \"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7\" (UID: \"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7\") " Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.180111 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d302da4-c96b-4efd-be3e-104812b4adfa-catalog-content\") pod \"4d302da4-c96b-4efd-be3e-104812b4adfa\" (UID: \"4d302da4-c96b-4efd-be3e-104812b4adfa\") " Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.180132 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnpd4\" (UniqueName: \"kubernetes.io/projected/4065d249-ffb1-406a-9e88-b6b97cf70f2a-kube-api-access-bnpd4\") pod \"4065d249-ffb1-406a-9e88-b6b97cf70f2a\" (UID: \"4065d249-ffb1-406a-9e88-b6b97cf70f2a\") " Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.180197 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4065d249-ffb1-406a-9e88-b6b97cf70f2a-marketplace-trusted-ca\") pod \"4065d249-ffb1-406a-9e88-b6b97cf70f2a\" (UID: \"4065d249-ffb1-406a-9e88-b6b97cf70f2a\") " Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.180235 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4d302da4-c96b-4efd-be3e-104812b4adfa-utilities\") pod \"4d302da4-c96b-4efd-be3e-104812b4adfa\" (UID: \"4d302da4-c96b-4efd-be3e-104812b4adfa\") " Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.180359 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a736316a-06cf-4768-bb70-f5c9ed61de8f-catalog-content\") pod \"a736316a-06cf-4768-bb70-f5c9ed61de8f\" (UID: \"a736316a-06cf-4768-bb70-f5c9ed61de8f\") " Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.180394 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99rgn\" (UniqueName: \"kubernetes.io/projected/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-kube-api-access-99rgn\") pod \"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7\" (UID: \"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7\") " Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.180470 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4065d249-ffb1-406a-9e88-b6b97cf70f2a-marketplace-operator-metrics\") pod \"4065d249-ffb1-406a-9e88-b6b97cf70f2a\" (UID: \"4065d249-ffb1-406a-9e88-b6b97cf70f2a\") " Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.180629 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-utilities\") pod \"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7\" (UID: \"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7\") " Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.184694 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4065d249-ffb1-406a-9e88-b6b97cf70f2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "4065d249-ffb1-406a-9e88-b6b97cf70f2a" (UID: "4065d249-ffb1-406a-9e88-b6b97cf70f2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.186260 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d302da4-c96b-4efd-be3e-104812b4adfa-utilities" (OuterVolumeSpecName: "utilities") pod "4d302da4-c96b-4efd-be3e-104812b4adfa" (UID: "4d302da4-c96b-4efd-be3e-104812b4adfa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.194210 4625 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4065d249-ffb1-406a-9e88-b6b97cf70f2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.194239 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d302da4-c96b-4efd-be3e-104812b4adfa-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.196878 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-utilities" (OuterVolumeSpecName: "utilities") pod "b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" (UID: "b8ede536-4ca2-48e0-ac63-7efdd3ec5de7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.198271 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a736316a-06cf-4768-bb70-f5c9ed61de8f-utilities" (OuterVolumeSpecName: "utilities") pod "a736316a-06cf-4768-bb70-f5c9ed61de8f" (UID: "a736316a-06cf-4768-bb70-f5c9ed61de8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.236014 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a736316a-06cf-4768-bb70-f5c9ed61de8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a736316a-06cf-4768-bb70-f5c9ed61de8f" (UID: "a736316a-06cf-4768-bb70-f5c9ed61de8f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.252445 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a736316a-06cf-4768-bb70-f5c9ed61de8f-kube-api-access-lqzw5" (OuterVolumeSpecName: "kube-api-access-lqzw5") pod "a736316a-06cf-4768-bb70-f5c9ed61de8f" (UID: "a736316a-06cf-4768-bb70-f5c9ed61de8f"). InnerVolumeSpecName "kube-api-access-lqzw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.252623 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-kube-api-access-99rgn" (OuterVolumeSpecName: "kube-api-access-99rgn") pod "b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" (UID: "b8ede536-4ca2-48e0-ac63-7efdd3ec5de7"). InnerVolumeSpecName "kube-api-access-99rgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.254932 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4065d249-ffb1-406a-9e88-b6b97cf70f2a-kube-api-access-bnpd4" (OuterVolumeSpecName: "kube-api-access-bnpd4") pod "4065d249-ffb1-406a-9e88-b6b97cf70f2a" (UID: "4065d249-ffb1-406a-9e88-b6b97cf70f2a"). InnerVolumeSpecName "kube-api-access-bnpd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.255268 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d302da4-c96b-4efd-be3e-104812b4adfa-kube-api-access-77bq8" (OuterVolumeSpecName: "kube-api-access-77bq8") pod "4d302da4-c96b-4efd-be3e-104812b4adfa" (UID: "4d302da4-c96b-4efd-be3e-104812b4adfa"). InnerVolumeSpecName "kube-api-access-77bq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.293393 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lm5sj"] Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.293715 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4065d249-ffb1-406a-9e88-b6b97cf70f2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "4065d249-ffb1-406a-9e88-b6b97cf70f2a" (UID: "4065d249-ffb1-406a-9e88-b6b97cf70f2a"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.296863 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.296934 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77bq8\" (UniqueName: \"kubernetes.io/projected/4d302da4-c96b-4efd-be3e-104812b4adfa-kube-api-access-77bq8\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.296954 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a736316a-06cf-4768-bb70-f5c9ed61de8f-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.297023 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqzw5\" (UniqueName: \"kubernetes.io/projected/a736316a-06cf-4768-bb70-f5c9ed61de8f-kube-api-access-lqzw5\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.297042 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnpd4\" (UniqueName: \"kubernetes.io/projected/4065d249-ffb1-406a-9e88-b6b97cf70f2a-kube-api-access-bnpd4\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.297056 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a736316a-06cf-4768-bb70-f5c9ed61de8f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.297099 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99rgn\" (UniqueName: \"kubernetes.io/projected/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-kube-api-access-99rgn\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.297119 4625 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4065d249-ffb1-406a-9e88-b6b97cf70f2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.387236 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" (UID: "b8ede536-4ca2-48e0-ac63-7efdd3ec5de7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.404763 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.443700 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d302da4-c96b-4efd-be3e-104812b4adfa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d302da4-c96b-4efd-be3e-104812b4adfa" (UID: "4d302da4-c96b-4efd-be3e-104812b4adfa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.506625 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d302da4-c96b-4efd-be3e-104812b4adfa-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:04 crc kubenswrapper[4625]: I1202 13:51:04.868297 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="039b4452-411a-43c5-9823-860c079e5de3" path="/var/lib/kubelet/pods/039b4452-411a-43c5-9823-860c079e5de3/volumes" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.036602 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ds7zw" event={"ID":"b8ede536-4ca2-48e0-ac63-7efdd3ec5de7","Type":"ContainerDied","Data":"38d40aa46fc697def0883acfdb0052cf016cd8084bf994e0a0f411fefb00cce6"} Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.036657 4625 scope.go:117] "RemoveContainer" containerID="b6ecacc018ddbe7bc6af5a2858a3b5032672da150a18d969c69cd6535c3d1c5c" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.036757 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ds7zw" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.043474 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77pll" event={"ID":"a736316a-06cf-4768-bb70-f5c9ed61de8f","Type":"ContainerDied","Data":"9078691d8799800387b35fb8fa8d09cf8c2cfb9be263752c3d61060b764f85d6"} Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.043564 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77pll" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.049052 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwb28" event={"ID":"4d302da4-c96b-4efd-be3e-104812b4adfa","Type":"ContainerDied","Data":"49fd83f8636d85002a58e5d522732fe753bac986bf4ab04027ca82ddcd70693c"} Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.049078 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zwb28" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.052278 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lm5sj" event={"ID":"82f63ecf-aa95-429e-a39a-796125dfa29c","Type":"ContainerStarted","Data":"c4df3cfb576a463daafa972fbc4624a53aba7b793e46078f35147f03dcb6fa85"} Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.052339 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lm5sj" event={"ID":"82f63ecf-aa95-429e-a39a-796125dfa29c","Type":"ContainerStarted","Data":"6a274ff50b01f76377d3995e3e1f8cdc94456a4c61f919016d75f39ff754133e"} Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.053989 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lm5sj" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.056929 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" event={"ID":"4065d249-ffb1-406a-9e88-b6b97cf70f2a","Type":"ContainerDied","Data":"d763f5317fa42a4122c1cca0d85e09ee3b5941821b6598a434175d82d1f84669"} Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.057217 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hm5k5" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.102327 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lm5sj" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.134951 4625 scope.go:117] "RemoveContainer" containerID="353646e2114f73f4554d843ebab41ae1863363342dfba615bc2dc33143c2d2f9" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.144872 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lm5sj" podStartSLOduration=2.144841892 podStartE2EDuration="2.144841892s" podCreationTimestamp="2025-12-02 13:51:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:51:05.143613649 +0000 UTC m=+421.105790724" watchObservedRunningTime="2025-12-02 13:51:05.144841892 +0000 UTC m=+421.107018967" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.174821 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ds7zw"] Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.187887 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ds7zw"] Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.205111 4625 scope.go:117] "RemoveContainer" containerID="bc4dfc67d8976a947041980b6e619a956e649951911e270dbe61a91ef7d03eb2" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.234108 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hm5k5"] Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.253413 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hm5k5"] Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.265437 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zwb28"] Dec 02 13:51:05 crc 
kubenswrapper[4625]: I1202 13:51:05.274045 4625 scope.go:117] "RemoveContainer" containerID="0388556eb45ea4134affbb114ce7b1e8d52eb08118e613079cbdcadb68cd1afc" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.276570 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zwb28"] Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.282643 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-77pll"] Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.292132 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-77pll"] Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.297710 4625 scope.go:117] "RemoveContainer" containerID="4a85a3358a925efd7ff5f6090d702f77db3281fc5cb965356015d4f7bf7f1870" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.336939 4625 scope.go:117] "RemoveContainer" containerID="fcfb8bbc724ce9d5883a6947b3907c2b8535c88216c311587d2e572ede040d1b" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.375395 4625 scope.go:117] "RemoveContainer" containerID="f719411005de7a9353f80da4d4f62c2862aa343d49d832879ff14745e96d178b" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.408628 4625 scope.go:117] "RemoveContainer" containerID="697d4f9f321cc6f112101e7dd5f1e3d51e76a2b5b7a9abc91daaad718753d269" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.452548 4625 scope.go:117] "RemoveContainer" containerID="f2713deb56806c1da2b0833f58ff0daaa107bdd8d983ad95a484383d48319bce" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.468170 4625 scope.go:117] "RemoveContainer" containerID="a8b5b7690802899bbf7f6a18be22c374122e8f664e0ae4e4911c46cf73ed43f2" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.737662 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-t9kbs" Dec 02 13:51:05 crc kubenswrapper[4625]: I1202 13:51:05.792824 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sc4p7"] Dec 02 13:51:06 crc kubenswrapper[4625]: I1202 13:51:06.871708 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4065d249-ffb1-406a-9e88-b6b97cf70f2a" path="/var/lib/kubelet/pods/4065d249-ffb1-406a-9e88-b6b97cf70f2a/volumes" Dec 02 13:51:06 crc kubenswrapper[4625]: I1202 13:51:06.872406 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" path="/var/lib/kubelet/pods/4d302da4-c96b-4efd-be3e-104812b4adfa/volumes" Dec 02 13:51:06 crc kubenswrapper[4625]: I1202 13:51:06.873176 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a736316a-06cf-4768-bb70-f5c9ed61de8f" path="/var/lib/kubelet/pods/a736316a-06cf-4768-bb70-f5c9ed61de8f/volumes" Dec 02 13:51:06 crc kubenswrapper[4625]: I1202 13:51:06.874605 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" path="/var/lib/kubelet/pods/b8ede536-4ca2-48e0-ac63-7efdd3ec5de7/volumes" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.280587 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fg2pz"] Dec 02 13:51:07 crc kubenswrapper[4625]: E1202 13:51:07.281087 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a736316a-06cf-4768-bb70-f5c9ed61de8f" containerName="extract-content" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 
13:51:07.281099 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="a736316a-06cf-4768-bb70-f5c9ed61de8f" containerName="extract-content" Dec 02 13:51:07 crc kubenswrapper[4625]: E1202 13:51:07.281110 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" containerName="extract-content" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281116 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" containerName="extract-content" Dec 02 13:51:07 crc kubenswrapper[4625]: E1202 13:51:07.281126 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" containerName="registry-server" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281133 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" containerName="registry-server" Dec 02 13:51:07 crc kubenswrapper[4625]: E1202 13:51:07.281139 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" containerName="extract-content" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281145 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" containerName="extract-content" Dec 02 13:51:07 crc kubenswrapper[4625]: E1202 13:51:07.281152 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="039b4452-411a-43c5-9823-860c079e5de3" containerName="registry-server" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281158 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="039b4452-411a-43c5-9823-860c079e5de3" containerName="registry-server" Dec 02 13:51:07 crc kubenswrapper[4625]: E1202 13:51:07.281166 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a736316a-06cf-4768-bb70-f5c9ed61de8f" containerName="registry-server" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281172 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="a736316a-06cf-4768-bb70-f5c9ed61de8f" containerName="registry-server" Dec 02 13:51:07 crc kubenswrapper[4625]: E1202 13:51:07.281183 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4065d249-ffb1-406a-9e88-b6b97cf70f2a" containerName="marketplace-operator" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281189 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="4065d249-ffb1-406a-9e88-b6b97cf70f2a" containerName="marketplace-operator" Dec 02 13:51:07 crc kubenswrapper[4625]: E1202 13:51:07.281199 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" containerName="registry-server" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281205 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" containerName="registry-server" Dec 02 13:51:07 crc kubenswrapper[4625]: E1202 13:51:07.281213 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" containerName="extract-utilities" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281220 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" containerName="extract-utilities" Dec 02 13:51:07 crc kubenswrapper[4625]: E1202 13:51:07.281229 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a736316a-06cf-4768-bb70-f5c9ed61de8f" containerName="extract-utilities" Dec 02 
13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281235 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="a736316a-06cf-4768-bb70-f5c9ed61de8f" containerName="extract-utilities" Dec 02 13:51:07 crc kubenswrapper[4625]: E1202 13:51:07.281246 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="039b4452-411a-43c5-9823-860c079e5de3" containerName="extract-utilities" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281252 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="039b4452-411a-43c5-9823-860c079e5de3" containerName="extract-utilities" Dec 02 13:51:07 crc kubenswrapper[4625]: E1202 13:51:07.281261 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="039b4452-411a-43c5-9823-860c079e5de3" containerName="extract-content" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281267 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="039b4452-411a-43c5-9823-860c079e5de3" containerName="extract-content" Dec 02 13:51:07 crc kubenswrapper[4625]: E1202 13:51:07.281275 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" containerName="extract-utilities" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281280 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" containerName="extract-utilities" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281403 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ede536-4ca2-48e0-ac63-7efdd3ec5de7" containerName="registry-server" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281418 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="4065d249-ffb1-406a-9e88-b6b97cf70f2a" containerName="marketplace-operator" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281428 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d302da4-c96b-4efd-be3e-104812b4adfa" containerName="registry-server" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281439 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="a736316a-06cf-4768-bb70-f5c9ed61de8f" containerName="registry-server" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281449 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="039b4452-411a-43c5-9823-860c079e5de3" containerName="registry-server" Dec 02 13:51:07 crc kubenswrapper[4625]: E1202 13:51:07.281550 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4065d249-ffb1-406a-9e88-b6b97cf70f2a" containerName="marketplace-operator" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281557 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="4065d249-ffb1-406a-9e88-b6b97cf70f2a" containerName="marketplace-operator" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.281634 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="4065d249-ffb1-406a-9e88-b6b97cf70f2a" containerName="marketplace-operator" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.282219 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fg2pz" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.284765 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.297251 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fg2pz"] Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.349442 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e117ec72-f216-4090-87b4-d645c924c53f-utilities\") pod \"certified-operators-fg2pz\" (UID: \"e117ec72-f216-4090-87b4-d645c924c53f\") " pod="openshift-marketplace/certified-operators-fg2pz" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.349506 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm9zc\" (UniqueName: \"kubernetes.io/projected/e117ec72-f216-4090-87b4-d645c924c53f-kube-api-access-nm9zc\") pod \"certified-operators-fg2pz\" (UID: \"e117ec72-f216-4090-87b4-d645c924c53f\") " pod="openshift-marketplace/certified-operators-fg2pz" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.349545 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e117ec72-f216-4090-87b4-d645c924c53f-catalog-content\") pod \"certified-operators-fg2pz\" (UID: \"e117ec72-f216-4090-87b4-d645c924c53f\") " pod="openshift-marketplace/certified-operators-fg2pz" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.452161 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e117ec72-f216-4090-87b4-d645c924c53f-utilities\") pod \"certified-operators-fg2pz\" (UID: \"e117ec72-f216-4090-87b4-d645c924c53f\") " pod="openshift-marketplace/certified-operators-fg2pz" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.452502 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm9zc\" (UniqueName: \"kubernetes.io/projected/e117ec72-f216-4090-87b4-d645c924c53f-kube-api-access-nm9zc\") pod \"certified-operators-fg2pz\" (UID: \"e117ec72-f216-4090-87b4-d645c924c53f\") " pod="openshift-marketplace/certified-operators-fg2pz" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.452640 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e117ec72-f216-4090-87b4-d645c924c53f-catalog-content\") pod \"certified-operators-fg2pz\" (UID: \"e117ec72-f216-4090-87b4-d645c924c53f\") " pod="openshift-marketplace/certified-operators-fg2pz" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.452836 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e117ec72-f216-4090-87b4-d645c924c53f-utilities\") pod \"certified-operators-fg2pz\" (UID: \"e117ec72-f216-4090-87b4-d645c924c53f\") " pod="openshift-marketplace/certified-operators-fg2pz" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.453004 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e117ec72-f216-4090-87b4-d645c924c53f-catalog-content\") pod \"certified-operators-fg2pz\" (UID: 
\"e117ec72-f216-4090-87b4-d645c924c53f\") " pod="openshift-marketplace/certified-operators-fg2pz" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.479412 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nrhd2"] Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.480723 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nrhd2" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.482418 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm9zc\" (UniqueName: \"kubernetes.io/projected/e117ec72-f216-4090-87b4-d645c924c53f-kube-api-access-nm9zc\") pod \"certified-operators-fg2pz\" (UID: \"e117ec72-f216-4090-87b4-d645c924c53f\") " pod="openshift-marketplace/certified-operators-fg2pz" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.483298 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.496905 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nrhd2"] Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.553947 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/460994e6-261b-4787-bed8-8b4ad1d83e3d-utilities\") pod \"community-operators-nrhd2\" (UID: \"460994e6-261b-4787-bed8-8b4ad1d83e3d\") " pod="openshift-marketplace/community-operators-nrhd2" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.554006 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzr5w\" (UniqueName: \"kubernetes.io/projected/460994e6-261b-4787-bed8-8b4ad1d83e3d-kube-api-access-rzr5w\") pod \"community-operators-nrhd2\" (UID: \"460994e6-261b-4787-bed8-8b4ad1d83e3d\") " pod="openshift-marketplace/community-operators-nrhd2" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.554047 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/460994e6-261b-4787-bed8-8b4ad1d83e3d-catalog-content\") pod \"community-operators-nrhd2\" (UID: \"460994e6-261b-4787-bed8-8b4ad1d83e3d\") " pod="openshift-marketplace/community-operators-nrhd2" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.609557 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fg2pz" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.655366 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/460994e6-261b-4787-bed8-8b4ad1d83e3d-utilities\") pod \"community-operators-nrhd2\" (UID: \"460994e6-261b-4787-bed8-8b4ad1d83e3d\") " pod="openshift-marketplace/community-operators-nrhd2" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.655410 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzr5w\" (UniqueName: \"kubernetes.io/projected/460994e6-261b-4787-bed8-8b4ad1d83e3d-kube-api-access-rzr5w\") pod \"community-operators-nrhd2\" (UID: \"460994e6-261b-4787-bed8-8b4ad1d83e3d\") " pod="openshift-marketplace/community-operators-nrhd2" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.655446 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/460994e6-261b-4787-bed8-8b4ad1d83e3d-catalog-content\") pod \"community-operators-nrhd2\" (UID: \"460994e6-261b-4787-bed8-8b4ad1d83e3d\") " pod="openshift-marketplace/community-operators-nrhd2" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.655859 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/460994e6-261b-4787-bed8-8b4ad1d83e3d-catalog-content\") pod \"community-operators-nrhd2\" (UID: \"460994e6-261b-4787-bed8-8b4ad1d83e3d\") " pod="openshift-marketplace/community-operators-nrhd2" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.656065 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/460994e6-261b-4787-bed8-8b4ad1d83e3d-utilities\") pod \"community-operators-nrhd2\" (UID: \"460994e6-261b-4787-bed8-8b4ad1d83e3d\") " pod="openshift-marketplace/community-operators-nrhd2" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.678697 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzr5w\" (UniqueName: \"kubernetes.io/projected/460994e6-261b-4787-bed8-8b4ad1d83e3d-kube-api-access-rzr5w\") pod \"community-operators-nrhd2\" (UID: \"460994e6-261b-4787-bed8-8b4ad1d83e3d\") " pod="openshift-marketplace/community-operators-nrhd2" Dec 02 13:51:07 crc kubenswrapper[4625]: I1202 13:51:07.813737 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nrhd2" Dec 02 13:51:08 crc kubenswrapper[4625]: I1202 13:51:08.046858 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fg2pz"] Dec 02 13:51:08 crc kubenswrapper[4625]: W1202 13:51:08.052541 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode117ec72_f216_4090_87b4_d645c924c53f.slice/crio-d5f19b851d2bd2ce9ca9872d2d4c18d94cb98a765af6b6e939b67f6bb0ef28d3 WatchSource:0}: Error finding container d5f19b851d2bd2ce9ca9872d2d4c18d94cb98a765af6b6e939b67f6bb0ef28d3: Status 404 returned error can't find the container with id d5f19b851d2bd2ce9ca9872d2d4c18d94cb98a765af6b6e939b67f6bb0ef28d3 Dec 02 13:51:08 crc kubenswrapper[4625]: I1202 13:51:08.079618 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg2pz" event={"ID":"e117ec72-f216-4090-87b4-d645c924c53f","Type":"ContainerStarted","Data":"d5f19b851d2bd2ce9ca9872d2d4c18d94cb98a765af6b6e939b67f6bb0ef28d3"} Dec 02 13:51:08 crc kubenswrapper[4625]: I1202 13:51:08.241903 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nrhd2"] Dec 02 13:51:08 crc kubenswrapper[4625]: W1202 13:51:08.255572 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod460994e6_261b_4787_bed8_8b4ad1d83e3d.slice/crio-15677c38eca0b53395aebab074bdb9e8665347c07dd0bdc14389bee1a2b2c2ef WatchSource:0}: Error finding container 15677c38eca0b53395aebab074bdb9e8665347c07dd0bdc14389bee1a2b2c2ef: Status 404 returned error can't find the container with id 15677c38eca0b53395aebab074bdb9e8665347c07dd0bdc14389bee1a2b2c2ef Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.092187 4625 generic.go:334] "Generic (PLEG): container finished" podID="460994e6-261b-4787-bed8-8b4ad1d83e3d" containerID="14748d19a3f5a71f7923afeb15a1d8c14e0d8014e6d8a66b367a8c7aa0254b75" exitCode=0 Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.092816 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrhd2" event={"ID":"460994e6-261b-4787-bed8-8b4ad1d83e3d","Type":"ContainerDied","Data":"14748d19a3f5a71f7923afeb15a1d8c14e0d8014e6d8a66b367a8c7aa0254b75"} Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.092874 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrhd2" event={"ID":"460994e6-261b-4787-bed8-8b4ad1d83e3d","Type":"ContainerStarted","Data":"15677c38eca0b53395aebab074bdb9e8665347c07dd0bdc14389bee1a2b2c2ef"} Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.100738 4625 generic.go:334] "Generic (PLEG): container finished" podID="e117ec72-f216-4090-87b4-d645c924c53f" containerID="5c0e509b69d818fcb048107fe04ea8c1041f96a04587253d2c1220e0c33f860c" exitCode=0 Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.100924 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg2pz" event={"ID":"e117ec72-f216-4090-87b4-d645c924c53f","Type":"ContainerDied","Data":"5c0e509b69d818fcb048107fe04ea8c1041f96a04587253d2c1220e0c33f860c"} Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.679342 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d27sv"] Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.680697 4625 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d27sv" Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.684147 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.689282 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gqss\" (UniqueName: \"kubernetes.io/projected/1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7-kube-api-access-2gqss\") pod \"redhat-marketplace-d27sv\" (UID: \"1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7\") " pod="openshift-marketplace/redhat-marketplace-d27sv" Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.689394 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7-catalog-content\") pod \"redhat-marketplace-d27sv\" (UID: \"1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7\") " pod="openshift-marketplace/redhat-marketplace-d27sv" Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.689472 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7-utilities\") pod \"redhat-marketplace-d27sv\" (UID: \"1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7\") " pod="openshift-marketplace/redhat-marketplace-d27sv" Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.707176 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d27sv"] Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.790822 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7-catalog-content\") pod \"redhat-marketplace-d27sv\" (UID: \"1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7\") " pod="openshift-marketplace/redhat-marketplace-d27sv" Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.790956 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7-utilities\") pod \"redhat-marketplace-d27sv\" (UID: \"1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7\") " pod="openshift-marketplace/redhat-marketplace-d27sv" Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.791003 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gqss\" (UniqueName: \"kubernetes.io/projected/1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7-kube-api-access-2gqss\") pod \"redhat-marketplace-d27sv\" (UID: \"1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7\") " pod="openshift-marketplace/redhat-marketplace-d27sv" Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.792024 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7-utilities\") pod \"redhat-marketplace-d27sv\" (UID: \"1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7\") " pod="openshift-marketplace/redhat-marketplace-d27sv" Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.793964 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7-catalog-content\") pod 
\"redhat-marketplace-d27sv\" (UID: \"1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7\") " pod="openshift-marketplace/redhat-marketplace-d27sv" Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.815841 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gqss\" (UniqueName: \"kubernetes.io/projected/1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7-kube-api-access-2gqss\") pod \"redhat-marketplace-d27sv\" (UID: \"1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7\") " pod="openshift-marketplace/redhat-marketplace-d27sv" Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.877389 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sc4nh"] Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.878696 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sc4nh" Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.886985 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sc4nh"] Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.892455 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.995585 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2hkh\" (UniqueName: \"kubernetes.io/projected/435bd873-5e0f-4479-b59b-1fd1f39fd50e-kube-api-access-n2hkh\") pod \"redhat-operators-sc4nh\" (UID: \"435bd873-5e0f-4479-b59b-1fd1f39fd50e\") " pod="openshift-marketplace/redhat-operators-sc4nh" Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.995654 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/435bd873-5e0f-4479-b59b-1fd1f39fd50e-utilities\") pod \"redhat-operators-sc4nh\" (UID: \"435bd873-5e0f-4479-b59b-1fd1f39fd50e\") " pod="openshift-marketplace/redhat-operators-sc4nh" Dec 02 13:51:09 crc kubenswrapper[4625]: I1202 13:51:09.995715 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/435bd873-5e0f-4479-b59b-1fd1f39fd50e-catalog-content\") pod \"redhat-operators-sc4nh\" (UID: \"435bd873-5e0f-4479-b59b-1fd1f39fd50e\") " pod="openshift-marketplace/redhat-operators-sc4nh" Dec 02 13:51:10 crc kubenswrapper[4625]: I1202 13:51:10.001548 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d27sv" Dec 02 13:51:10 crc kubenswrapper[4625]: I1202 13:51:10.104174 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2hkh\" (UniqueName: \"kubernetes.io/projected/435bd873-5e0f-4479-b59b-1fd1f39fd50e-kube-api-access-n2hkh\") pod \"redhat-operators-sc4nh\" (UID: \"435bd873-5e0f-4479-b59b-1fd1f39fd50e\") " pod="openshift-marketplace/redhat-operators-sc4nh" Dec 02 13:51:10 crc kubenswrapper[4625]: I1202 13:51:10.104554 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/435bd873-5e0f-4479-b59b-1fd1f39fd50e-utilities\") pod \"redhat-operators-sc4nh\" (UID: \"435bd873-5e0f-4479-b59b-1fd1f39fd50e\") " pod="openshift-marketplace/redhat-operators-sc4nh" Dec 02 13:51:10 crc kubenswrapper[4625]: I1202 13:51:10.104623 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/435bd873-5e0f-4479-b59b-1fd1f39fd50e-catalog-content\") pod \"redhat-operators-sc4nh\" (UID: \"435bd873-5e0f-4479-b59b-1fd1f39fd50e\") " pod="openshift-marketplace/redhat-operators-sc4nh" Dec 02 13:51:10 crc kubenswrapper[4625]: I1202 13:51:10.105240 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/435bd873-5e0f-4479-b59b-1fd1f39fd50e-catalog-content\") pod \"redhat-operators-sc4nh\" (UID: \"435bd873-5e0f-4479-b59b-1fd1f39fd50e\") " pod="openshift-marketplace/redhat-operators-sc4nh" Dec 02 13:51:10 crc kubenswrapper[4625]: I1202 13:51:10.105419 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/435bd873-5e0f-4479-b59b-1fd1f39fd50e-utilities\") pod \"redhat-operators-sc4nh\" (UID: \"435bd873-5e0f-4479-b59b-1fd1f39fd50e\") " pod="openshift-marketplace/redhat-operators-sc4nh" Dec 02 13:51:10 crc kubenswrapper[4625]: I1202 13:51:10.131790 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2hkh\" (UniqueName: \"kubernetes.io/projected/435bd873-5e0f-4479-b59b-1fd1f39fd50e-kube-api-access-n2hkh\") pod \"redhat-operators-sc4nh\" (UID: \"435bd873-5e0f-4479-b59b-1fd1f39fd50e\") " pod="openshift-marketplace/redhat-operators-sc4nh" Dec 02 13:51:10 crc kubenswrapper[4625]: I1202 13:51:10.204369 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sc4nh" Dec 02 13:51:10 crc kubenswrapper[4625]: I1202 13:51:10.442581 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d27sv"] Dec 02 13:51:10 crc kubenswrapper[4625]: W1202 13:51:10.446392 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a3b53b2_9707_4f7e_94b7_dd3f7b8082e7.slice/crio-40899386a0da3f573df7a3aefa81eb16f130d45a81ca9eb991c997defd6bce87 WatchSource:0}: Error finding container 40899386a0da3f573df7a3aefa81eb16f130d45a81ca9eb991c997defd6bce87: Status 404 returned error can't find the container with id 40899386a0da3f573df7a3aefa81eb16f130d45a81ca9eb991c997defd6bce87 Dec 02 13:51:10 crc kubenswrapper[4625]: I1202 13:51:10.697930 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sc4nh"] Dec 02 13:51:11 crc kubenswrapper[4625]: I1202 13:51:11.123364 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc4nh" event={"ID":"435bd873-5e0f-4479-b59b-1fd1f39fd50e","Type":"ContainerStarted","Data":"3a6f1e71bf37cc79d2e1397745b59b28ef8cf84816eee283f4e65f145228baf0"} Dec 02 13:51:11 crc kubenswrapper[4625]: I1202 13:51:11.123419 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc4nh" event={"ID":"435bd873-5e0f-4479-b59b-1fd1f39fd50e","Type":"ContainerStarted","Data":"744994dee5b3449248f9fa04397448936db218b8faa4cdb6ba68db40f52495ad"} Dec 02 13:51:11 crc kubenswrapper[4625]: I1202 13:51:11.129025 4625 generic.go:334] "Generic (PLEG): container finished" podID="1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7" containerID="9a6823896772b03bf73573ba38bea8fe021aaf6316b45731e524f88326254002" exitCode=0 Dec 02 13:51:11 crc kubenswrapper[4625]: I1202 13:51:11.129124 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d27sv" event={"ID":"1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7","Type":"ContainerDied","Data":"9a6823896772b03bf73573ba38bea8fe021aaf6316b45731e524f88326254002"} Dec 02 13:51:11 crc kubenswrapper[4625]: I1202 13:51:11.129155 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d27sv" event={"ID":"1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7","Type":"ContainerStarted","Data":"40899386a0da3f573df7a3aefa81eb16f130d45a81ca9eb991c997defd6bce87"} Dec 02 13:51:11 crc kubenswrapper[4625]: I1202 13:51:11.138289 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrhd2" event={"ID":"460994e6-261b-4787-bed8-8b4ad1d83e3d","Type":"ContainerStarted","Data":"a61e1786d4063cc2a9a3344d9da560417d511370732b111436bd7693422e442d"} Dec 02 13:51:11 crc kubenswrapper[4625]: I1202 13:51:11.151527 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg2pz" event={"ID":"e117ec72-f216-4090-87b4-d645c924c53f","Type":"ContainerStarted","Data":"c379cdcc02e60072d5f54a917190c2e8bdf982eb185cf597a242144b9d891791"} Dec 02 13:51:12 crc kubenswrapper[4625]: I1202 13:51:12.161659 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d27sv" event={"ID":"1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7","Type":"ContainerStarted","Data":"c40b888e0bcca4f628736c7786de0a267d81f1026e2cf9b993087a6829c4ea55"} Dec 02 13:51:12 crc kubenswrapper[4625]: I1202 13:51:12.163740 4625 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrhd2" event={"ID":"460994e6-261b-4787-bed8-8b4ad1d83e3d","Type":"ContainerDied","Data":"a61e1786d4063cc2a9a3344d9da560417d511370732b111436bd7693422e442d"} Dec 02 13:51:12 crc kubenswrapper[4625]: I1202 13:51:12.163744 4625 generic.go:334] "Generic (PLEG): container finished" podID="460994e6-261b-4787-bed8-8b4ad1d83e3d" containerID="a61e1786d4063cc2a9a3344d9da560417d511370732b111436bd7693422e442d" exitCode=0 Dec 02 13:51:12 crc kubenswrapper[4625]: I1202 13:51:12.173423 4625 generic.go:334] "Generic (PLEG): container finished" podID="e117ec72-f216-4090-87b4-d645c924c53f" containerID="c379cdcc02e60072d5f54a917190c2e8bdf982eb185cf597a242144b9d891791" exitCode=0 Dec 02 13:51:12 crc kubenswrapper[4625]: I1202 13:51:12.174836 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg2pz" event={"ID":"e117ec72-f216-4090-87b4-d645c924c53f","Type":"ContainerDied","Data":"c379cdcc02e60072d5f54a917190c2e8bdf982eb185cf597a242144b9d891791"} Dec 02 13:51:12 crc kubenswrapper[4625]: I1202 13:51:12.179082 4625 generic.go:334] "Generic (PLEG): container finished" podID="435bd873-5e0f-4479-b59b-1fd1f39fd50e" containerID="3a6f1e71bf37cc79d2e1397745b59b28ef8cf84816eee283f4e65f145228baf0" exitCode=0 Dec 02 13:51:12 crc kubenswrapper[4625]: I1202 13:51:12.179224 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc4nh" event={"ID":"435bd873-5e0f-4479-b59b-1fd1f39fd50e","Type":"ContainerDied","Data":"3a6f1e71bf37cc79d2e1397745b59b28ef8cf84816eee283f4e65f145228baf0"} Dec 02 13:51:13 crc kubenswrapper[4625]: I1202 13:51:13.190465 4625 generic.go:334] "Generic (PLEG): container finished" podID="1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7" containerID="c40b888e0bcca4f628736c7786de0a267d81f1026e2cf9b993087a6829c4ea55" exitCode=0 Dec 02 13:51:13 crc kubenswrapper[4625]: I1202 13:51:13.190550 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d27sv" event={"ID":"1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7","Type":"ContainerDied","Data":"c40b888e0bcca4f628736c7786de0a267d81f1026e2cf9b993087a6829c4ea55"} Dec 02 13:51:13 crc kubenswrapper[4625]: I1202 13:51:13.450115 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh"] Dec 02 13:51:13 crc kubenswrapper[4625]: I1202 13:51:13.451008 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" podUID="4f6ac22c-1b16-4c94-9616-4710e3b0ec50" containerName="route-controller-manager" containerID="cri-o://c30c2cc3a28d10eb5d40d84b9227cde60defa010a383f3e7715c98e7c2fa6913" gracePeriod=30 Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.201801 4625 generic.go:334] "Generic (PLEG): container finished" podID="4f6ac22c-1b16-4c94-9616-4710e3b0ec50" containerID="c30c2cc3a28d10eb5d40d84b9227cde60defa010a383f3e7715c98e7c2fa6913" exitCode=0 Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.202364 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" event={"ID":"4f6ac22c-1b16-4c94-9616-4710e3b0ec50","Type":"ContainerDied","Data":"c30c2cc3a28d10eb5d40d84b9227cde60defa010a383f3e7715c98e7c2fa6913"} Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.202415 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" event={"ID":"4f6ac22c-1b16-4c94-9616-4710e3b0ec50","Type":"ContainerDied","Data":"ece8285ab8eab29a93ecde6f2937813f6a7d1c7ead429baadb51d13f13d99956"} Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.202430 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ece8285ab8eab29a93ecde6f2937813f6a7d1c7ead429baadb51d13f13d99956" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.468489 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.568445 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj"] Dec 02 13:51:14 crc kubenswrapper[4625]: E1202 13:51:14.569255 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f6ac22c-1b16-4c94-9616-4710e3b0ec50" containerName="route-controller-manager" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.569284 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f6ac22c-1b16-4c94-9616-4710e3b0ec50" containerName="route-controller-manager" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.569478 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f6ac22c-1b16-4c94-9616-4710e3b0ec50" containerName="route-controller-manager" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.570055 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.579775 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kxgc\" (UniqueName: \"kubernetes.io/projected/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-kube-api-access-8kxgc\") pod \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\" (UID: \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\") " Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.579908 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-serving-cert\") pod \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\" (UID: \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\") " Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.580043 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-client-ca\") pod \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\" (UID: \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\") " Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.580086 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-config\") pod \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\" (UID: \"4f6ac22c-1b16-4c94-9616-4710e3b0ec50\") " Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.581239 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-client-ca" (OuterVolumeSpecName: "client-ca") pod "4f6ac22c-1b16-4c94-9616-4710e3b0ec50" (UID: "4f6ac22c-1b16-4c94-9616-4710e3b0ec50"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.581262 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-config" (OuterVolumeSpecName: "config") pod "4f6ac22c-1b16-4c94-9616-4710e3b0ec50" (UID: "4f6ac22c-1b16-4c94-9616-4710e3b0ec50"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.595108 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj"] Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.597881 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-kube-api-access-8kxgc" (OuterVolumeSpecName: "kube-api-access-8kxgc") pod "4f6ac22c-1b16-4c94-9616-4710e3b0ec50" (UID: "4f6ac22c-1b16-4c94-9616-4710e3b0ec50"). InnerVolumeSpecName "kube-api-access-8kxgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.599550 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4f6ac22c-1b16-4c94-9616-4710e3b0ec50" (UID: "4f6ac22c-1b16-4c94-9616-4710e3b0ec50"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.681427 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8fb2bcc-cd9b-487a-b3b2-c2a46354d573-config\") pod \"route-controller-manager-9c659d5b4-gwtwj\" (UID: \"d8fb2bcc-cd9b-487a-b3b2-c2a46354d573\") " pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.681537 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8fb2bcc-cd9b-487a-b3b2-c2a46354d573-client-ca\") pod \"route-controller-manager-9c659d5b4-gwtwj\" (UID: \"d8fb2bcc-cd9b-487a-b3b2-c2a46354d573\") " pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.681587 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8fb2bcc-cd9b-487a-b3b2-c2a46354d573-serving-cert\") pod \"route-controller-manager-9c659d5b4-gwtwj\" (UID: \"d8fb2bcc-cd9b-487a-b3b2-c2a46354d573\") " pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.681641 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7p84\" (UniqueName: \"kubernetes.io/projected/d8fb2bcc-cd9b-487a-b3b2-c2a46354d573-kube-api-access-h7p84\") pod \"route-controller-manager-9c659d5b4-gwtwj\" (UID: \"d8fb2bcc-cd9b-487a-b3b2-c2a46354d573\") " pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.681888 4625 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.681941 4625 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-client-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.681959 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.681983 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kxgc\" (UniqueName: \"kubernetes.io/projected/4f6ac22c-1b16-4c94-9616-4710e3b0ec50-kube-api-access-8kxgc\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.782897 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8fb2bcc-cd9b-487a-b3b2-c2a46354d573-client-ca\") pod \"route-controller-manager-9c659d5b4-gwtwj\" (UID: \"d8fb2bcc-cd9b-487a-b3b2-c2a46354d573\") " pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.782982 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8fb2bcc-cd9b-487a-b3b2-c2a46354d573-serving-cert\") pod \"route-controller-manager-9c659d5b4-gwtwj\" (UID: \"d8fb2bcc-cd9b-487a-b3b2-c2a46354d573\") " pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.783022 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7p84\" (UniqueName: \"kubernetes.io/projected/d8fb2bcc-cd9b-487a-b3b2-c2a46354d573-kube-api-access-h7p84\") pod \"route-controller-manager-9c659d5b4-gwtwj\" (UID: \"d8fb2bcc-cd9b-487a-b3b2-c2a46354d573\") " pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.783071 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8fb2bcc-cd9b-487a-b3b2-c2a46354d573-config\") pod \"route-controller-manager-9c659d5b4-gwtwj\" (UID: \"d8fb2bcc-cd9b-487a-b3b2-c2a46354d573\") " pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.785358 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8fb2bcc-cd9b-487a-b3b2-c2a46354d573-config\") pod \"route-controller-manager-9c659d5b4-gwtwj\" (UID: \"d8fb2bcc-cd9b-487a-b3b2-c2a46354d573\") " pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.786185 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8fb2bcc-cd9b-487a-b3b2-c2a46354d573-client-ca\") pod \"route-controller-manager-9c659d5b4-gwtwj\" (UID: \"d8fb2bcc-cd9b-487a-b3b2-c2a46354d573\") " pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.803626 4625 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7p84\" (UniqueName: \"kubernetes.io/projected/d8fb2bcc-cd9b-487a-b3b2-c2a46354d573-kube-api-access-h7p84\") pod \"route-controller-manager-9c659d5b4-gwtwj\" (UID: \"d8fb2bcc-cd9b-487a-b3b2-c2a46354d573\") " pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.813436 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8fb2bcc-cd9b-487a-b3b2-c2a46354d573-serving-cert\") pod \"route-controller-manager-9c659d5b4-gwtwj\" (UID: \"d8fb2bcc-cd9b-487a-b3b2-c2a46354d573\") " pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" Dec 02 13:51:14 crc kubenswrapper[4625]: I1202 13:51:14.979394 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" Dec 02 13:51:15 crc kubenswrapper[4625]: I1202 13:51:15.238193 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc4nh" event={"ID":"435bd873-5e0f-4479-b59b-1fd1f39fd50e","Type":"ContainerStarted","Data":"1386810dd737be18d1b3e4175f6ee0511de368dbb94887294228e03fc8d7ad1f"} Dec 02 13:51:15 crc kubenswrapper[4625]: I1202 13:51:15.256881 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d27sv" event={"ID":"1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7","Type":"ContainerStarted","Data":"119f34304619c7f566b9368a5befea89c2680f610352e49b5f418a85fcb27e40"} Dec 02 13:51:15 crc kubenswrapper[4625]: I1202 13:51:15.276052 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrhd2" event={"ID":"460994e6-261b-4787-bed8-8b4ad1d83e3d","Type":"ContainerStarted","Data":"9ab2a2d03e1820520c9e3a0e3e1b083d5903db32ab013675c48b6051097c8786"} Dec 02 13:51:15 crc kubenswrapper[4625]: I1202 13:51:15.299185 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d27sv" podStartSLOduration=3.765948684 podStartE2EDuration="6.299161386s" podCreationTimestamp="2025-12-02 13:51:09 +0000 UTC" firstStartedPulling="2025-12-02 13:51:11.133697461 +0000 UTC m=+427.095874536" lastFinishedPulling="2025-12-02 13:51:13.666910163 +0000 UTC m=+429.629087238" observedRunningTime="2025-12-02 13:51:15.295460365 +0000 UTC m=+431.257637440" watchObservedRunningTime="2025-12-02 13:51:15.299161386 +0000 UTC m=+431.261338461" Dec 02 13:51:15 crc kubenswrapper[4625]: I1202 13:51:15.300957 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh" Dec 02 13:51:15 crc kubenswrapper[4625]: I1202 13:51:15.301967 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fg2pz" event={"ID":"e117ec72-f216-4090-87b4-d645c924c53f","Type":"ContainerStarted","Data":"27ae89998ef28b40fc033d129089338b42e8a84b6c73ec90a28d1b5b17fd0e77"} Dec 02 13:51:15 crc kubenswrapper[4625]: I1202 13:51:15.325334 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nrhd2" podStartSLOduration=4.153626325 podStartE2EDuration="8.32529513s" podCreationTimestamp="2025-12-02 13:51:07 +0000 UTC" firstStartedPulling="2025-12-02 13:51:09.096285641 +0000 UTC m=+425.058462766" lastFinishedPulling="2025-12-02 13:51:13.267954496 +0000 UTC m=+429.230131571" observedRunningTime="2025-12-02 13:51:15.318517015 +0000 UTC m=+431.280694110" watchObservedRunningTime="2025-12-02 13:51:15.32529513 +0000 UTC m=+431.287472205" Dec 02 13:51:15 crc kubenswrapper[4625]: I1202 13:51:15.369972 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fg2pz" podStartSLOduration=4.373430429 podStartE2EDuration="8.369944679s" podCreationTimestamp="2025-12-02 13:51:07 +0000 UTC" firstStartedPulling="2025-12-02 13:51:09.102935703 +0000 UTC m=+425.065112778" lastFinishedPulling="2025-12-02 13:51:13.099449953 +0000 UTC m=+429.061627028" observedRunningTime="2025-12-02 13:51:15.351949608 +0000 UTC m=+431.314126703" watchObservedRunningTime="2025-12-02 13:51:15.369944679 +0000 UTC m=+431.332121774" Dec 02 13:51:15 crc kubenswrapper[4625]: I1202 13:51:15.377500 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh"] Dec 02 13:51:15 crc kubenswrapper[4625]: I1202 13:51:15.383821 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c669855c-kqjjh"] Dec 02 13:51:15 crc kubenswrapper[4625]: I1202 13:51:15.557117 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj"] Dec 02 13:51:16 crc kubenswrapper[4625]: I1202 13:51:16.312789 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" event={"ID":"d8fb2bcc-cd9b-487a-b3b2-c2a46354d573","Type":"ContainerStarted","Data":"76f199af0737d6904f0cd777b27ad24fa87b1436b3b128da01450ffa0414aa83"} Dec 02 13:51:16 crc kubenswrapper[4625]: I1202 13:51:16.313226 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" event={"ID":"d8fb2bcc-cd9b-487a-b3b2-c2a46354d573","Type":"ContainerStarted","Data":"64936d7432247b8c7a6247af3aae299a1c0d1d3b401e0afcadbd1f42f453dbbf"} Dec 02 13:51:16 crc kubenswrapper[4625]: I1202 13:51:16.313247 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" Dec 02 13:51:16 crc kubenswrapper[4625]: I1202 13:51:16.317055 4625 generic.go:334] "Generic (PLEG): container finished" podID="435bd873-5e0f-4479-b59b-1fd1f39fd50e" containerID="1386810dd737be18d1b3e4175f6ee0511de368dbb94887294228e03fc8d7ad1f" exitCode=0 Dec 02 13:51:16 crc kubenswrapper[4625]: I1202 13:51:16.317723 4625 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc4nh" event={"ID":"435bd873-5e0f-4479-b59b-1fd1f39fd50e","Type":"ContainerDied","Data":"1386810dd737be18d1b3e4175f6ee0511de368dbb94887294228e03fc8d7ad1f"} Dec 02 13:51:16 crc kubenswrapper[4625]: I1202 13:51:16.324751 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" Dec 02 13:51:16 crc kubenswrapper[4625]: I1202 13:51:16.339413 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9c659d5b4-gwtwj" podStartSLOduration=3.339393459 podStartE2EDuration="3.339393459s" podCreationTimestamp="2025-12-02 13:51:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:51:16.336298764 +0000 UTC m=+432.298475839" watchObservedRunningTime="2025-12-02 13:51:16.339393459 +0000 UTC m=+432.301570534" Dec 02 13:51:16 crc kubenswrapper[4625]: I1202 13:51:16.876657 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f6ac22c-1b16-4c94-9616-4710e3b0ec50" path="/var/lib/kubelet/pods/4f6ac22c-1b16-4c94-9616-4710e3b0ec50/volumes" Dec 02 13:51:17 crc kubenswrapper[4625]: I1202 13:51:17.325503 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc4nh" event={"ID":"435bd873-5e0f-4479-b59b-1fd1f39fd50e","Type":"ContainerStarted","Data":"0e7c5d177fbc75af5a85b78edbd3a8ef4d9703f6a788ba13f35c4f786c2c8729"} Dec 02 13:51:17 crc kubenswrapper[4625]: I1202 13:51:17.353999 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sc4nh" podStartSLOduration=3.582423792 podStartE2EDuration="8.353973331s" podCreationTimestamp="2025-12-02 13:51:09 +0000 UTC" firstStartedPulling="2025-12-02 13:51:12.181277135 +0000 UTC m=+428.143454210" lastFinishedPulling="2025-12-02 13:51:16.952826674 +0000 UTC m=+432.915003749" observedRunningTime="2025-12-02 13:51:17.349692224 +0000 UTC m=+433.311869289" watchObservedRunningTime="2025-12-02 13:51:17.353973331 +0000 UTC m=+433.316150406" Dec 02 13:51:17 crc kubenswrapper[4625]: I1202 13:51:17.610591 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fg2pz" Dec 02 13:51:17 crc kubenswrapper[4625]: I1202 13:51:17.611327 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fg2pz" Dec 02 13:51:17 crc kubenswrapper[4625]: I1202 13:51:17.668772 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fg2pz" Dec 02 13:51:17 crc kubenswrapper[4625]: I1202 13:51:17.814637 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nrhd2" Dec 02 13:51:17 crc kubenswrapper[4625]: I1202 13:51:17.815135 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nrhd2" Dec 02 13:51:17 crc kubenswrapper[4625]: I1202 13:51:17.860178 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nrhd2" Dec 02 13:51:19 crc kubenswrapper[4625]: I1202 13:51:19.271238 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 13:51:19 crc kubenswrapper[4625]: I1202 13:51:19.271798 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 13:51:19 crc kubenswrapper[4625]: I1202 13:51:19.271879 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f"
Dec 02 13:51:19 crc kubenswrapper[4625]: I1202 13:51:19.272930 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90afaf702b407cb71259af2ad7b1c5b8d7e4cfd9bc8d832ed3732d63ee2b7839"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 13:51:19 crc kubenswrapper[4625]: I1202 13:51:19.273009 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://90afaf702b407cb71259af2ad7b1c5b8d7e4cfd9bc8d832ed3732d63ee2b7839" gracePeriod=600
Dec 02 13:51:19 crc kubenswrapper[4625]: I1202 13:51:19.387678 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nrhd2"
Dec 02 13:51:19 crc kubenswrapper[4625]: I1202 13:51:19.403899 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fg2pz"
Dec 02 13:51:20 crc kubenswrapper[4625]: I1202 13:51:20.002158 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d27sv"
Dec 02 13:51:20 crc kubenswrapper[4625]: I1202 13:51:20.002442 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d27sv"
Dec 02 13:51:20 crc kubenswrapper[4625]: I1202 13:51:20.067921 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d27sv"
Dec 02 13:51:20 crc kubenswrapper[4625]: I1202 13:51:20.204842 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sc4nh"
Dec 02 13:51:20 crc kubenswrapper[4625]: I1202 13:51:20.204910 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sc4nh"
Dec 02 13:51:20 crc kubenswrapper[4625]: I1202 13:51:20.346636 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="90afaf702b407cb71259af2ad7b1c5b8d7e4cfd9bc8d832ed3732d63ee2b7839" exitCode=0
Dec 02 13:51:20 crc kubenswrapper[4625]: I1202 13:51:20.347560 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"90afaf702b407cb71259af2ad7b1c5b8d7e4cfd9bc8d832ed3732d63ee2b7839"}
Dec 02 13:51:20 crc kubenswrapper[4625]: I1202 13:51:20.347602 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"498c40948997cf435dda7f03aba2bbba840fadd308257b759a0a93eec21ef79e"}
Dec 02 13:51:20 crc kubenswrapper[4625]: I1202 13:51:20.347643 4625 scope.go:117] "RemoveContainer" containerID="1edb5b3bdc215aca4a53477e910396b8647fe8831d00aa88a84b89e6375bd1c2"
Dec 02 13:51:20 crc kubenswrapper[4625]: I1202 13:51:20.400150 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d27sv"
Dec 02 13:51:21 crc kubenswrapper[4625]: I1202 13:51:21.242529 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sc4nh" podUID="435bd873-5e0f-4479-b59b-1fd1f39fd50e" containerName="registry-server" probeResult="failure" output=<
Dec 02 13:51:21 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s
Dec 02 13:51:21 crc kubenswrapper[4625]: >
Dec 02 13:51:30 crc kubenswrapper[4625]: I1202 13:51:30.246331 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sc4nh"
Dec 02 13:51:30 crc kubenswrapper[4625]: I1202 13:51:30.295127 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sc4nh"
Dec 02 13:51:30 crc kubenswrapper[4625]: I1202 13:51:30.839547 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" podUID="c87d97fb-8391-4f0f-8b3d-a404721de262" containerName="registry" containerID="cri-o://5159b10f6b186f7766c36a0d7d1ed189b9bce04e91d82ca3cfad8dbb013d2db4" gracePeriod=30
Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.419996 4625 generic.go:334] "Generic (PLEG): container finished" podID="c87d97fb-8391-4f0f-8b3d-a404721de262" containerID="5159b10f6b186f7766c36a0d7d1ed189b9bce04e91d82ca3cfad8dbb013d2db4" exitCode=0
Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.420065 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" event={"ID":"c87d97fb-8391-4f0f-8b3d-a404721de262","Type":"ContainerDied","Data":"5159b10f6b186f7766c36a0d7d1ed189b9bce04e91d82ca3cfad8dbb013d2db4"}
Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.778817 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7"
Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.961196 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpsks\" (UniqueName: \"kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-kube-api-access-wpsks\") pod \"c87d97fb-8391-4f0f-8b3d-a404721de262\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") "
Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.961571 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c87d97fb-8391-4f0f-8b3d-a404721de262\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") "
Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.961630 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-bound-sa-token\") pod \"c87d97fb-8391-4f0f-8b3d-a404721de262\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") "
Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.961668 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c87d97fb-8391-4f0f-8b3d-a404721de262-trusted-ca\") pod \"c87d97fb-8391-4f0f-8b3d-a404721de262\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") "
Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.961694 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-registry-tls\") pod \"c87d97fb-8391-4f0f-8b3d-a404721de262\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") "
Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.961740 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c87d97fb-8391-4f0f-8b3d-a404721de262-ca-trust-extracted\") pod \"c87d97fb-8391-4f0f-8b3d-a404721de262\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") "
Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.961807 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c87d97fb-8391-4f0f-8b3d-a404721de262-installation-pull-secrets\") pod \"c87d97fb-8391-4f0f-8b3d-a404721de262\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") "
Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.961843 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c87d97fb-8391-4f0f-8b3d-a404721de262-registry-certificates\") pod \"c87d97fb-8391-4f0f-8b3d-a404721de262\" (UID: \"c87d97fb-8391-4f0f-8b3d-a404721de262\") "
Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.963973 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c87d97fb-8391-4f0f-8b3d-a404721de262-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c87d97fb-8391-4f0f-8b3d-a404721de262" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.965290 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c87d97fb-8391-4f0f-8b3d-a404721de262-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c87d97fb-8391-4f0f-8b3d-a404721de262" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.976196 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c87d97fb-8391-4f0f-8b3d-a404721de262-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c87d97fb-8391-4f0f-8b3d-a404721de262" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.976834 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-kube-api-access-wpsks" (OuterVolumeSpecName: "kube-api-access-wpsks") pod "c87d97fb-8391-4f0f-8b3d-a404721de262" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262"). InnerVolumeSpecName "kube-api-access-wpsks". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.977120 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c87d97fb-8391-4f0f-8b3d-a404721de262" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.977505 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c87d97fb-8391-4f0f-8b3d-a404721de262" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.979619 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c87d97fb-8391-4f0f-8b3d-a404721de262" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:51:31 crc kubenswrapper[4625]: I1202 13:51:31.985838 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c87d97fb-8391-4f0f-8b3d-a404721de262-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c87d97fb-8391-4f0f-8b3d-a404721de262" (UID: "c87d97fb-8391-4f0f-8b3d-a404721de262"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:51:32 crc kubenswrapper[4625]: I1202 13:51:32.063542 4625 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c87d97fb-8391-4f0f-8b3d-a404721de262-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:32 crc kubenswrapper[4625]: I1202 13:51:32.063590 4625 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:32 crc kubenswrapper[4625]: I1202 13:51:32.063606 4625 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c87d97fb-8391-4f0f-8b3d-a404721de262-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:32 crc kubenswrapper[4625]: I1202 13:51:32.063622 4625 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c87d97fb-8391-4f0f-8b3d-a404721de262-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:32 crc kubenswrapper[4625]: I1202 13:51:32.063634 4625 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c87d97fb-8391-4f0f-8b3d-a404721de262-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:32 crc kubenswrapper[4625]: I1202 13:51:32.063645 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpsks\" (UniqueName: \"kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-kube-api-access-wpsks\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:32 crc kubenswrapper[4625]: I1202 13:51:32.063654 4625 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c87d97fb-8391-4f0f-8b3d-a404721de262-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 02 13:51:32 crc kubenswrapper[4625]: I1202 13:51:32.427499 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" event={"ID":"c87d97fb-8391-4f0f-8b3d-a404721de262","Type":"ContainerDied","Data":"91968f780d930fe60b855a2569264c6b072b4fb2d225c09d665488692898d94f"} Dec 02 13:51:32 crc kubenswrapper[4625]: I1202 13:51:32.427581 4625 scope.go:117] "RemoveContainer" containerID="5159b10f6b186f7766c36a0d7d1ed189b9bce04e91d82ca3cfad8dbb013d2db4" Dec 02 13:51:32 crc kubenswrapper[4625]: I1202 13:51:32.427589 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sc4p7" Dec 02 13:51:32 crc kubenswrapper[4625]: I1202 13:51:32.469772 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sc4p7"] Dec 02 13:51:32 crc kubenswrapper[4625]: I1202 13:51:32.474117 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sc4p7"] Dec 02 13:51:32 crc kubenswrapper[4625]: I1202 13:51:32.864249 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c87d97fb-8391-4f0f-8b3d-a404721de262" path="/var/lib/kubelet/pods/c87d97fb-8391-4f0f-8b3d-a404721de262/volumes" Dec 02 13:53:49 crc kubenswrapper[4625]: I1202 13:53:49.271633 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:53:49 crc kubenswrapper[4625]: I1202 13:53:49.272584 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:54:19 crc kubenswrapper[4625]: I1202 13:54:19.271948 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:54:19 crc kubenswrapper[4625]: I1202 13:54:19.272506 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:54:49 crc kubenswrapper[4625]: I1202 13:54:49.271572 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:54:49 crc kubenswrapper[4625]: I1202 13:54:49.272080 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:54:49 crc kubenswrapper[4625]: I1202 13:54:49.272130 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 13:54:49 crc kubenswrapper[4625]: I1202 13:54:49.272651 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"498c40948997cf435dda7f03aba2bbba840fadd308257b759a0a93eec21ef79e"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed 
Dec 02 13:54:49 crc kubenswrapper[4625]: I1202 13:54:49.272698 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://498c40948997cf435dda7f03aba2bbba840fadd308257b759a0a93eec21ef79e" gracePeriod=600
Dec 02 13:54:49 crc kubenswrapper[4625]: I1202 13:54:49.744850 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="498c40948997cf435dda7f03aba2bbba840fadd308257b759a0a93eec21ef79e" exitCode=0
Dec 02 13:54:49 crc kubenswrapper[4625]: I1202 13:54:49.744929 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"498c40948997cf435dda7f03aba2bbba840fadd308257b759a0a93eec21ef79e"}
Dec 02 13:54:49 crc kubenswrapper[4625]: I1202 13:54:49.745684 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"c1d575805cab2283b92f1a4e7b510b132f2ba9784cf488248063f8b6d7df5e2f"}
Dec 02 13:54:49 crc kubenswrapper[4625]: I1202 13:54:49.745716 4625 scope.go:117] "RemoveContainer" containerID="90afaf702b407cb71259af2ad7b1c5b8d7e4cfd9bc8d832ed3732d63ee2b7839"
Dec 02 13:56:44 crc kubenswrapper[4625]: I1202 13:56:44.947623 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-wtcwd"]
Dec 02 13:56:44 crc kubenswrapper[4625]: E1202 13:56:44.948778 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87d97fb-8391-4f0f-8b3d-a404721de262" containerName="registry"
Dec 02 13:56:44 crc kubenswrapper[4625]: I1202 13:56:44.948797 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87d97fb-8391-4f0f-8b3d-a404721de262" containerName="registry"
Dec 02 13:56:44 crc kubenswrapper[4625]: I1202 13:56:44.948923 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="c87d97fb-8391-4f0f-8b3d-a404721de262" containerName="registry"
Dec 02 13:56:44 crc kubenswrapper[4625]: I1202 13:56:44.949468 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-wtcwd"
Dec 02 13:56:44 crc kubenswrapper[4625]: I1202 13:56:44.958451 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mh5c\" (UniqueName: \"kubernetes.io/projected/1a51a454-f4a2-4fac-9d2b-121515d4dcac-kube-api-access-7mh5c\") pod \"cert-manager-cainjector-7f985d654d-wtcwd\" (UID: \"1a51a454-f4a2-4fac-9d2b-121515d4dcac\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-wtcwd"
Dec 02 13:56:44 crc kubenswrapper[4625]: I1202 13:56:44.978737 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-6svm9"]
Dec 02 13:56:44 crc kubenswrapper[4625]: I1202 13:56:44.979829 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-6svm9"
Dec 02 13:56:44 crc kubenswrapper[4625]: I1202 13:56:44.980541 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Dec 02 13:56:44 crc kubenswrapper[4625]: I1202 13:56:44.984774 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Dec 02 13:56:44 crc kubenswrapper[4625]: I1202 13:56:44.985167 4625 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-qbnbd"
Dec 02 13:56:44 crc kubenswrapper[4625]: I1202 13:56:44.985332 4625 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-7pqbt"
Dec 02 13:56:44 crc kubenswrapper[4625]: I1202 13:56:44.989294 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-wtcwd"]
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.106449 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-6svm9"]
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.111808 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mh5c\" (UniqueName: \"kubernetes.io/projected/1a51a454-f4a2-4fac-9d2b-121515d4dcac-kube-api-access-7mh5c\") pod \"cert-manager-cainjector-7f985d654d-wtcwd\" (UID: \"1a51a454-f4a2-4fac-9d2b-121515d4dcac\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-wtcwd"
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.178654 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-cxgzh"]
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.179827 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-cxgzh"
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.184278 4625 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-bwgkd"
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.251081 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mh5c\" (UniqueName: \"kubernetes.io/projected/1a51a454-f4a2-4fac-9d2b-121515d4dcac-kube-api-access-7mh5c\") pod \"cert-manager-cainjector-7f985d654d-wtcwd\" (UID: \"1a51a454-f4a2-4fac-9d2b-121515d4dcac\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-wtcwd"
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.251994 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5wcv\" (UniqueName: \"kubernetes.io/projected/9143a513-bf1b-4452-bf75-f5fea106cda0-kube-api-access-m5wcv\") pod \"cert-manager-5b446d88c5-6svm9\" (UID: \"9143a513-bf1b-4452-bf75-f5fea106cda0\") " pod="cert-manager/cert-manager-5b446d88c5-6svm9"
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.253032 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-cxgzh"]
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.273703 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-wtcwd"
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.353208 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5wcv\" (UniqueName: \"kubernetes.io/projected/9143a513-bf1b-4452-bf75-f5fea106cda0-kube-api-access-m5wcv\") pod \"cert-manager-5b446d88c5-6svm9\" (UID: \"9143a513-bf1b-4452-bf75-f5fea106cda0\") " pod="cert-manager/cert-manager-5b446d88c5-6svm9"
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.353573 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rvcf\" (UniqueName: \"kubernetes.io/projected/aeb44bf0-3409-493e-9666-17615ae63452-kube-api-access-6rvcf\") pod \"cert-manager-webhook-5655c58dd6-cxgzh\" (UID: \"aeb44bf0-3409-493e-9666-17615ae63452\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-cxgzh"
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.404576 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5wcv\" (UniqueName: \"kubernetes.io/projected/9143a513-bf1b-4452-bf75-f5fea106cda0-kube-api-access-m5wcv\") pod \"cert-manager-5b446d88c5-6svm9\" (UID: \"9143a513-bf1b-4452-bf75-f5fea106cda0\") " pod="cert-manager/cert-manager-5b446d88c5-6svm9"
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.454843 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rvcf\" (UniqueName: \"kubernetes.io/projected/aeb44bf0-3409-493e-9666-17615ae63452-kube-api-access-6rvcf\") pod \"cert-manager-webhook-5655c58dd6-cxgzh\" (UID: \"aeb44bf0-3409-493e-9666-17615ae63452\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-cxgzh"
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.484099 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rvcf\" (UniqueName: \"kubernetes.io/projected/aeb44bf0-3409-493e-9666-17615ae63452-kube-api-access-6rvcf\") pod \"cert-manager-webhook-5655c58dd6-cxgzh\" (UID: \"aeb44bf0-3409-493e-9666-17615ae63452\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-cxgzh"
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.553688 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-cxgzh"
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.663235 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-wtcwd"]
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.670932 4625 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.697011 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-6svm9"
Dec 02 13:56:45 crc kubenswrapper[4625]: I1202 13:56:45.867834 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-cxgzh"]
Dec 02 13:56:46 crc kubenswrapper[4625]: I1202 13:56:46.058614 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-6svm9"]
Dec 02 13:56:46 crc kubenswrapper[4625]: W1202 13:56:46.067680 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9143a513_bf1b_4452_bf75_f5fea106cda0.slice/crio-197831aa6de90098bd66e7c871cd9eb33a3a919551d16de512508236e5a33499 WatchSource:0}: Error finding container 197831aa6de90098bd66e7c871cd9eb33a3a919551d16de512508236e5a33499: Status 404 returned error can't find the container with id 197831aa6de90098bd66e7c871cd9eb33a3a919551d16de512508236e5a33499
Dec 02 13:56:46 crc kubenswrapper[4625]: I1202 13:56:46.664151 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-6svm9" event={"ID":"9143a513-bf1b-4452-bf75-f5fea106cda0","Type":"ContainerStarted","Data":"197831aa6de90098bd66e7c871cd9eb33a3a919551d16de512508236e5a33499"}
Dec 02 13:56:46 crc kubenswrapper[4625]: I1202 13:56:46.665480 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-cxgzh" event={"ID":"aeb44bf0-3409-493e-9666-17615ae63452","Type":"ContainerStarted","Data":"1a65042f2edcc92246212eb277e5586cb9e7cc4ee9f58af7a626207e76271fbb"}
Dec 02 13:56:46 crc kubenswrapper[4625]: I1202 13:56:46.666620 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-wtcwd" event={"ID":"1a51a454-f4a2-4fac-9d2b-121515d4dcac","Type":"ContainerStarted","Data":"76abc66dffd5c633b6f52471f72c89c0520eb1e8bdecfcf31dbc6fb4cb1d1f3c"}
Dec 02 13:56:49 crc kubenswrapper[4625]: I1202 13:56:49.271921 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 13:56:49 crc kubenswrapper[4625]: I1202 13:56:49.272604 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 13:56:49 crc kubenswrapper[4625]: I1202 13:56:49.693800 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-cxgzh" event={"ID":"aeb44bf0-3409-493e-9666-17615ae63452","Type":"ContainerStarted","Data":"29eb3dd865c57726151d7c679122c624189de583764c9433dbf063c6ad33f68f"}
Dec 02 13:56:49 crc kubenswrapper[4625]: I1202 13:56:49.694417 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-cxgzh"
Dec 02 13:56:49 crc kubenswrapper[4625]: I1202 13:56:49.718037 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-cxgzh" podStartSLOduration=2.344859768 podStartE2EDuration="5.71801741s" podCreationTimestamp="2025-12-02 13:56:44 +0000 UTC" firstStartedPulling="2025-12-02 13:56:45.880184326 +0000 UTC m=+761.842361401" lastFinishedPulling="2025-12-02 13:56:49.253341968 +0000 UTC m=+765.215519043" observedRunningTime="2025-12-02 13:56:49.710453313 +0000 UTC m=+765.672630388" watchObservedRunningTime="2025-12-02 13:56:49.71801741 +0000 UTC m=+765.680194485"
Dec 02 13:56:50 crc kubenswrapper[4625]: I1202 13:56:50.701586 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-wtcwd" event={"ID":"1a51a454-f4a2-4fac-9d2b-121515d4dcac","Type":"ContainerStarted","Data":"b5b169e5edecc54893e0ae62a99d2f256790c3f204a1ddf41d235f4ca6d885b6"}
Dec 02 13:56:50 crc kubenswrapper[4625]: I1202 13:56:50.703632 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-6svm9" event={"ID":"9143a513-bf1b-4452-bf75-f5fea106cda0","Type":"ContainerStarted","Data":"1c89e094c3307a4806718ed4d26bb2bc6326563198160601f26d27de8a14b45f"}
Dec 02 13:56:50 crc kubenswrapper[4625]: I1202 13:56:50.731794 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-wtcwd" podStartSLOduration=2.389329516 podStartE2EDuration="6.731766177s" podCreationTimestamp="2025-12-02 13:56:44 +0000 UTC" firstStartedPulling="2025-12-02 13:56:45.670718103 +0000 UTC m=+761.632895178" lastFinishedPulling="2025-12-02 13:56:50.013154764 +0000 UTC m=+765.975331839" observedRunningTime="2025-12-02 13:56:50.725697963 +0000 UTC m=+766.687875038" watchObservedRunningTime="2025-12-02 13:56:50.731766177 +0000 UTC m=+766.693943242"
Dec 02 13:56:50 crc kubenswrapper[4625]: I1202 13:56:50.753438 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-6svm9" podStartSLOduration=2.7311942609999997 podStartE2EDuration="6.753411909s" podCreationTimestamp="2025-12-02 13:56:44 +0000 UTC" firstStartedPulling="2025-12-02 13:56:46.070014177 +0000 UTC m=+762.032191252" lastFinishedPulling="2025-12-02 13:56:50.092231825 +0000 UTC m=+766.054408900" observedRunningTime="2025-12-02 13:56:50.750967899 +0000 UTC m=+766.713144994" watchObservedRunningTime="2025-12-02 13:56:50.753411909 +0000 UTC m=+766.715588974"
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.110365 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lslqf"]
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.111527 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="nbdb" containerID="cri-o://0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7" gracePeriod=30
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.111623 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f" gracePeriod=30
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.111863 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="northd" containerID="cri-o://6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb" gracePeriod=30
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.111990 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="sbdb" containerID="cri-o://30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87" gracePeriod=30
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.112159 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="kube-rbac-proxy-node" containerID="cri-o://d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766" gracePeriod=30
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.112526 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovn-acl-logging" containerID="cri-o://c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35" gracePeriod=30
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.112811 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovn-controller" containerID="cri-o://a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d" gracePeriod=30
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.296171 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" containerID="cri-o://3dde4af5126141af55c57d1dcd42a8a0e5dbbabeec143623d2b05abe1c27097c" gracePeriod=30
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.559163 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-cxgzh"
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.748588 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovnkube-controller/3.log"
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.751432 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovn-acl-logging/0.log"
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752086 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovn-controller/0.log"
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752561 4625 generic.go:334] "Generic (PLEG): container finished" podID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerID="3dde4af5126141af55c57d1dcd42a8a0e5dbbabeec143623d2b05abe1c27097c" exitCode=0
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752585 4625 generic.go:334] "Generic (PLEG): container finished" podID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerID="30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87" exitCode=0
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752594 4625 generic.go:334] "Generic (PLEG): container finished" podID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerID="0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7" exitCode=0
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752605 4625 generic.go:334] "Generic (PLEG): container finished" podID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerID="6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb" exitCode=0
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752613 4625 generic.go:334] "Generic (PLEG): container finished" podID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerID="350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f" exitCode=0
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752620 4625 generic.go:334] "Generic (PLEG): container finished" podID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerID="d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766" exitCode=0
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752629 4625 generic.go:334] "Generic (PLEG): container finished" podID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerID="c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35" exitCode=143
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752638 4625 generic.go:334] "Generic (PLEG): container finished" podID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerID="a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d" exitCode=143
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752632 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerDied","Data":"3dde4af5126141af55c57d1dcd42a8a0e5dbbabeec143623d2b05abe1c27097c"}
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752720 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerDied","Data":"30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87"}
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752738 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerDied","Data":"0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7"}
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752764 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerDied","Data":"6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb"}
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752778 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerDied","Data":"350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f"}
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752788 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerDied","Data":"d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766"}
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752806 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerDied","Data":"c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35"}
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752816 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerDied","Data":"a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d"}
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.752790 4625 scope.go:117] "RemoveContainer" containerID="0eea36a9d142bc84b976480b3d8bad9fe3e55bdd9a0946fb688feccfa7eae861"
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.757744 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lnf62_dd11bfd3-e3e2-47ac-8354-30dd684045dc/kube-multus/2.log"
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.758445 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lnf62_dd11bfd3-e3e2-47ac-8354-30dd684045dc/kube-multus/1.log"
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.758554 4625 generic.go:334] "Generic (PLEG): container finished" podID="dd11bfd3-e3e2-47ac-8354-30dd684045dc" containerID="63837bcbbf75cee360705ae64aca4a3b57f1b70420077e4997b6cce891c61050" exitCode=2
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.758651 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lnf62" event={"ID":"dd11bfd3-e3e2-47ac-8354-30dd684045dc","Type":"ContainerDied","Data":"63837bcbbf75cee360705ae64aca4a3b57f1b70420077e4997b6cce891c61050"}
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.759264 4625 scope.go:117] "RemoveContainer" containerID="63837bcbbf75cee360705ae64aca4a3b57f1b70420077e4997b6cce891c61050"
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.798418 4625 scope.go:117] "RemoveContainer" containerID="507ce7f93493157eaee11509f975c22a655957ec9c0e48169d075f4eb3a301ef"
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.946797 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovn-acl-logging/0.log"
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.948074 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovn-controller/0.log"
Dec 02 13:56:55 crc kubenswrapper[4625]: I1202 13:56:55.948773 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf"
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.019879 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f7cnf"] Dec 02 13:56:56 crc kubenswrapper[4625]: E1202 13:56:56.020150 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="nbdb" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020165 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="nbdb" Dec 02 13:56:56 crc kubenswrapper[4625]: E1202 13:56:56.020173 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020182 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: E1202 13:56:56.020192 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020199 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: E1202 13:56:56.020214 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovn-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020220 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovn-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: E1202 13:56:56.020233 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovn-acl-logging" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020239 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovn-acl-logging" Dec 02 13:56:56 crc kubenswrapper[4625]: E1202 13:56:56.020247 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020253 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 13:56:56 crc kubenswrapper[4625]: E1202 13:56:56.020260 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="sbdb" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020267 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="sbdb" Dec 02 13:56:56 crc kubenswrapper[4625]: E1202 13:56:56.020276 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="kube-rbac-proxy-node" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020283 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="kube-rbac-proxy-node" Dec 02 13:56:56 crc kubenswrapper[4625]: E1202 13:56:56.020293 4625 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="northd" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020300 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="northd" Dec 02 13:56:56 crc kubenswrapper[4625]: E1202 13:56:56.020333 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="kubecfg-setup" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020340 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="kubecfg-setup" Dec 02 13:56:56 crc kubenswrapper[4625]: E1202 13:56:56.020349 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020354 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020447 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020459 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovn-acl-logging" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020469 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020475 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="kube-rbac-proxy-ovn-metrics" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020483 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="northd" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020492 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="sbdb" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020499 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovn-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020507 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="nbdb" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020517 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020525 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="kube-rbac-proxy-node" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020534 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: E1202 13:56:56.020625 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020633 4625 
state_mem.go:107] "Deleted CPUSet assignment" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: E1202 13:56:56.020642 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020647 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.020746 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" containerName="ovnkube-controller" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.022508 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145135 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-ovnkube-script-lib\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145247 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-var-lib-cni-networks-ovn-kubernetes\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145280 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-ovnkube-config\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145346 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9tnx\" (UniqueName: \"kubernetes.io/projected/df437b8d-61b5-41ea-8f56-d5472e444b23-kube-api-access-b9tnx\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145396 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-env-overrides\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145419 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-cni-netd\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145447 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-systemd-units\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145480 4625 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-openvswitch\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145530 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-kubelet\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145558 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df437b8d-61b5-41ea-8f56-d5472e444b23-ovn-node-metrics-cert\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145580 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-etc-openvswitch\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145616 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-ovn\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145648 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-node-log\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145664 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-slash\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145683 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-var-lib-openvswitch\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145702 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-cni-bin\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145721 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-systemd\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145745 4625 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-run-ovn-kubernetes\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145777 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-run-netns\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.145814 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-log-socket\") pod \"df437b8d-61b5-41ea-8f56-d5472e444b23\" (UID: \"df437b8d-61b5-41ea-8f56-d5472e444b23\") " Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146016 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-kubelet\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146050 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-systemd-units\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146068 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/52f4e669-0311-4aa7-b434-93c7c03062a6-ovn-node-metrics-cert\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146097 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-slash\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146115 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-etc-openvswitch\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146136 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-cni-bin\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146158 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-var-lib-openvswitch\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146180 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/52f4e669-0311-4aa7-b434-93c7c03062a6-ovnkube-config\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146198 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x5px\" (UniqueName: \"kubernetes.io/projected/52f4e669-0311-4aa7-b434-93c7c03062a6-kube-api-access-2x5px\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146221 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146247 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-run-systemd\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146275 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-run-openvswitch\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146297 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146347 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-run-netns\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146364 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-log-socket\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 
13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146387 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-run-ovn\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146440 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/52f4e669-0311-4aa7-b434-93c7c03062a6-env-overrides\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146466 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/52f4e669-0311-4aa7-b434-93c7c03062a6-ovnkube-script-lib\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146489 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-cni-netd\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146508 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-node-log\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146628 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-log-socket" (OuterVolumeSpecName: "log-socket") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146705 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146730 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146753 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-node-log" (OuterVolumeSpecName: "node-log") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146776 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-slash" (OuterVolumeSpecName: "host-slash") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146796 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.146814 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.148124 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.148200 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.148151 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.148286 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.148335 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.148443 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.148497 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.148549 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.149021 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.149261 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.153549 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df437b8d-61b5-41ea-8f56-d5472e444b23-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.154165 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df437b8d-61b5-41ea-8f56-d5472e444b23-kube-api-access-b9tnx" (OuterVolumeSpecName: "kube-api-access-b9tnx") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "kube-api-access-b9tnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.164082 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "df437b8d-61b5-41ea-8f56-d5472e444b23" (UID: "df437b8d-61b5-41ea-8f56-d5472e444b23"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.251547 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-cni-bin\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.251687 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-var-lib-openvswitch\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.251737 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/52f4e669-0311-4aa7-b434-93c7c03062a6-ovnkube-config\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.251779 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x5px\" (UniqueName: \"kubernetes.io/projected/52f4e669-0311-4aa7-b434-93c7c03062a6-kube-api-access-2x5px\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.252015 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.252076 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-run-systemd\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.252169 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-run-openvswitch\") pod 
\"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.252223 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.252298 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-run-netns\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.252493 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-log-socket\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.252548 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-run-ovn\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.252597 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/52f4e669-0311-4aa7-b434-93c7c03062a6-env-overrides\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.252648 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/52f4e669-0311-4aa7-b434-93c7c03062a6-ovnkube-script-lib\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.252717 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-node-log\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.252752 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-cni-netd\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.252795 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-kubelet\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.252861 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-systemd-units\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.252905 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/52f4e669-0311-4aa7-b434-93c7c03062a6-ovn-node-metrics-cert\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.252980 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-slash\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253026 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-etc-openvswitch\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253128 4625 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253155 4625 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253187 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9tnx\" (UniqueName: \"kubernetes.io/projected/df437b8d-61b5-41ea-8f56-d5472e444b23-kube-api-access-b9tnx\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253212 4625 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253231 4625 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253252 4625 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253264 4625 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc 
kubenswrapper[4625]: I1202 13:56:56.253285 4625 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253322 4625 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df437b8d-61b5-41ea-8f56-d5472e444b23-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253344 4625 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253362 4625 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253381 4625 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-node-log\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253393 4625 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-slash\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253413 4625 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253434 4625 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253454 4625 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253481 4625 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253502 4625 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253522 4625 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/df437b8d-61b5-41ea-8f56-d5472e444b23-log-socket\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253534 4625 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/df437b8d-61b5-41ea-8f56-d5472e444b23-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253633 4625 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-etc-openvswitch\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253711 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-cni-bin\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253752 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-run-ovn\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253809 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-kubelet\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253860 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-systemd-units\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253866 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-log-socket\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.253886 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-cni-netd\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.254075 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-var-lib-openvswitch\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.254433 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-run-systemd\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.255113 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/52f4e669-0311-4aa7-b434-93c7c03062a6-ovnkube-config\") pod \"ovnkube-node-f7cnf\" (UID: 
\"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.254939 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-node-log\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.254985 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.255027 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-slash\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.255045 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.255071 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-run-openvswitch\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.255081 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/52f4e669-0311-4aa7-b434-93c7c03062a6-host-run-netns\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.254875 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/52f4e669-0311-4aa7-b434-93c7c03062a6-env-overrides\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.256215 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/52f4e669-0311-4aa7-b434-93c7c03062a6-ovnkube-script-lib\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.257949 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/52f4e669-0311-4aa7-b434-93c7c03062a6-ovn-node-metrics-cert\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 
13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.285189 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x5px\" (UniqueName: \"kubernetes.io/projected/52f4e669-0311-4aa7-b434-93c7c03062a6-kube-api-access-2x5px\") pod \"ovnkube-node-f7cnf\" (UID: \"52f4e669-0311-4aa7-b434-93c7c03062a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.339897 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:56:56 crc kubenswrapper[4625]: W1202 13:56:56.378913 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52f4e669_0311_4aa7_b434_93c7c03062a6.slice/crio-758bdd18a237681201f62c41ca1323f5517631b1b161e6a82398449dd93a2628 WatchSource:0}: Error finding container 758bdd18a237681201f62c41ca1323f5517631b1b161e6a82398449dd93a2628: Status 404 returned error can't find the container with id 758bdd18a237681201f62c41ca1323f5517631b1b161e6a82398449dd93a2628 Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.768086 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lnf62_dd11bfd3-e3e2-47ac-8354-30dd684045dc/kube-multus/2.log" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.768637 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lnf62" event={"ID":"dd11bfd3-e3e2-47ac-8354-30dd684045dc","Type":"ContainerStarted","Data":"5c96db94482aeceea87b307f6b5fbeef5145981a2e750cce8df21194387ea4a0"} Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.781623 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovn-acl-logging/0.log" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.782649 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lslqf_df437b8d-61b5-41ea-8f56-d5472e444b23/ovn-controller/0.log" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.784837 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" event={"ID":"df437b8d-61b5-41ea-8f56-d5472e444b23","Type":"ContainerDied","Data":"c3081f6a7ebaa0ab2558faa495ca2f234dc502dd123503856cded86dbf775bb4"} Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.784900 4625 scope.go:117] "RemoveContainer" containerID="3dde4af5126141af55c57d1dcd42a8a0e5dbbabeec143623d2b05abe1c27097c" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.785184 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lslqf" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.789756 4625 generic.go:334] "Generic (PLEG): container finished" podID="52f4e669-0311-4aa7-b434-93c7c03062a6" containerID="2f36a9499ac5445a5e8510274e5f19bd9a21c66c8ff7fadaf9cc518b79aabbd6" exitCode=0 Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.789846 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" event={"ID":"52f4e669-0311-4aa7-b434-93c7c03062a6","Type":"ContainerDied","Data":"2f36a9499ac5445a5e8510274e5f19bd9a21c66c8ff7fadaf9cc518b79aabbd6"} Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.789884 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" event={"ID":"52f4e669-0311-4aa7-b434-93c7c03062a6","Type":"ContainerStarted","Data":"758bdd18a237681201f62c41ca1323f5517631b1b161e6a82398449dd93a2628"} Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.805530 4625 scope.go:117] "RemoveContainer" containerID="30ce685e34c41448e4819587ea4adf86a50f3d0cf6abea9dcdd9445cd63f0c87" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.844515 4625 scope.go:117] "RemoveContainer" containerID="0dbbc53711b42a8bccf1eae0c35909e6ad30d6fd4ed2720640c5481731f2d7f7" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.873643 4625 scope.go:117] "RemoveContainer" containerID="6cac5092ca2ab788491696a2110bbbb9368311df38af3dac2659173903b432cb" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.882558 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lslqf"] Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.887263 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lslqf"] Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.902666 4625 scope.go:117] "RemoveContainer" containerID="350b9c45b742852139d9390fc26d9dfe03bf11914e5766d621dd9e20dcbee62f" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.919044 4625 scope.go:117] "RemoveContainer" containerID="d106e11355f785952c823972dcda5c9330b474a1434341c9e8daa94993835766" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.937058 4625 scope.go:117] "RemoveContainer" containerID="c1255f8ce3b93895b13656fe84db135a41bb4cdd1b85de3d4fb00ab6a12fda35" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.959901 4625 scope.go:117] "RemoveContainer" containerID="a9220363b8c13792a16142dc0f28f5d0148dbd1dc309a06c9de13fba64878f2d" Dec 02 13:56:56 crc kubenswrapper[4625]: I1202 13:56:56.987888 4625 scope.go:117] "RemoveContainer" containerID="b3c3c3eeafa6191fda7665ff182b53c56f11ee896f0fcf774cc4cf941b924756" Dec 02 13:56:57 crc kubenswrapper[4625]: I1202 13:56:57.801968 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" event={"ID":"52f4e669-0311-4aa7-b434-93c7c03062a6","Type":"ContainerStarted","Data":"e225b3292a28396aa8cddf4324329368e16e6b36e8a35e2d4ba4ba202a8c056d"} Dec 02 13:56:57 crc kubenswrapper[4625]: I1202 13:56:57.802575 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" event={"ID":"52f4e669-0311-4aa7-b434-93c7c03062a6","Type":"ContainerStarted","Data":"96ed71404c338ff2617d058f853e4271fdfe5f86519ef9761cd0a2b06baeeaed"} Dec 02 13:56:57 crc kubenswrapper[4625]: I1202 13:56:57.802597 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" 
event={"ID":"52f4e669-0311-4aa7-b434-93c7c03062a6","Type":"ContainerStarted","Data":"d5b203ae0484967ac2177c93d14cee775344099be4c4c233fb97b6a9a6b96ba4"} Dec 02 13:56:57 crc kubenswrapper[4625]: I1202 13:56:57.802610 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" event={"ID":"52f4e669-0311-4aa7-b434-93c7c03062a6","Type":"ContainerStarted","Data":"f9be1c3d3bd609557307e3688d047e2f07b7cc95c9d2f3313df768da667fd6dc"} Dec 02 13:56:57 crc kubenswrapper[4625]: I1202 13:56:57.802622 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" event={"ID":"52f4e669-0311-4aa7-b434-93c7c03062a6","Type":"ContainerStarted","Data":"2464d6eb78f60b92e38ba4980823becb18aec6f5a321fd427937a75052624724"} Dec 02 13:56:57 crc kubenswrapper[4625]: I1202 13:56:57.802634 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" event={"ID":"52f4e669-0311-4aa7-b434-93c7c03062a6","Type":"ContainerStarted","Data":"81c36b2e56837a5932a06f170c3aac68fb14fe5f38e042f9443ff4ed9bd8f15a"} Dec 02 13:56:58 crc kubenswrapper[4625]: I1202 13:56:58.864768 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df437b8d-61b5-41ea-8f56-d5472e444b23" path="/var/lib/kubelet/pods/df437b8d-61b5-41ea-8f56-d5472e444b23/volumes" Dec 02 13:56:59 crc kubenswrapper[4625]: I1202 13:56:59.818407 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" event={"ID":"52f4e669-0311-4aa7-b434-93c7c03062a6","Type":"ContainerStarted","Data":"e79aa0d9f4194df11e32b484d499e53692fb500bc80180a5877aba9e9f46860c"} Dec 02 13:57:02 crc kubenswrapper[4625]: I1202 13:57:02.844611 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" event={"ID":"52f4e669-0311-4aa7-b434-93c7c03062a6","Type":"ContainerStarted","Data":"66aacae9c9804a857e2f84caf8a16e122e60e64a920973d380fbe424da083726"} Dec 02 13:57:03 crc kubenswrapper[4625]: I1202 13:57:03.851572 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:57:03 crc kubenswrapper[4625]: I1202 13:57:03.851656 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:57:03 crc kubenswrapper[4625]: I1202 13:57:03.851826 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:57:03 crc kubenswrapper[4625]: I1202 13:57:03.889017 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:57:03 crc kubenswrapper[4625]: I1202 13:57:03.901119 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" podStartSLOduration=8.901094263 podStartE2EDuration="8.901094263s" podCreationTimestamp="2025-12-02 13:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:57:03.890681359 +0000 UTC m=+779.852858454" watchObservedRunningTime="2025-12-02 13:57:03.901094263 +0000 UTC m=+779.863271338" Dec 02 13:57:03 crc kubenswrapper[4625]: I1202 13:57:03.903899 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:57:05 crc kubenswrapper[4625]: I1202 
13:57:05.556169 4625 scope.go:117] "RemoveContainer" containerID="c30c2cc3a28d10eb5d40d84b9227cde60defa010a383f3e7715c98e7c2fa6913" Dec 02 13:57:05 crc kubenswrapper[4625]: I1202 13:57:05.839902 4625 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 02 13:57:19 crc kubenswrapper[4625]: I1202 13:57:19.271529 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:57:19 crc kubenswrapper[4625]: I1202 13:57:19.272509 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:57:26 crc kubenswrapper[4625]: I1202 13:57:26.371425 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f7cnf" Dec 02 13:57:37 crc kubenswrapper[4625]: I1202 13:57:37.540813 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm"] Dec 02 13:57:37 crc kubenswrapper[4625]: I1202 13:57:37.542793 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" Dec 02 13:57:37 crc kubenswrapper[4625]: I1202 13:57:37.550974 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm"] Dec 02 13:57:37 crc kubenswrapper[4625]: I1202 13:57:37.551716 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 02 13:57:37 crc kubenswrapper[4625]: I1202 13:57:37.580891 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g78ml\" (UniqueName: \"kubernetes.io/projected/6ba709b5-16eb-458a-a3ca-8d430acaf634-kube-api-access-g78ml\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm\" (UID: \"6ba709b5-16eb-458a-a3ca-8d430acaf634\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" Dec 02 13:57:37 crc kubenswrapper[4625]: I1202 13:57:37.580971 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ba709b5-16eb-458a-a3ca-8d430acaf634-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm\" (UID: \"6ba709b5-16eb-458a-a3ca-8d430acaf634\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" Dec 02 13:57:37 crc kubenswrapper[4625]: I1202 13:57:37.581006 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ba709b5-16eb-458a-a3ca-8d430acaf634-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm\" (UID: \"6ba709b5-16eb-458a-a3ca-8d430acaf634\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" Dec 02 13:57:37 
crc kubenswrapper[4625]: I1202 13:57:37.682062 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ba709b5-16eb-458a-a3ca-8d430acaf634-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm\" (UID: \"6ba709b5-16eb-458a-a3ca-8d430acaf634\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" Dec 02 13:57:37 crc kubenswrapper[4625]: I1202 13:57:37.682156 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g78ml\" (UniqueName: \"kubernetes.io/projected/6ba709b5-16eb-458a-a3ca-8d430acaf634-kube-api-access-g78ml\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm\" (UID: \"6ba709b5-16eb-458a-a3ca-8d430acaf634\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" Dec 02 13:57:37 crc kubenswrapper[4625]: I1202 13:57:37.682194 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ba709b5-16eb-458a-a3ca-8d430acaf634-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm\" (UID: \"6ba709b5-16eb-458a-a3ca-8d430acaf634\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" Dec 02 13:57:37 crc kubenswrapper[4625]: I1202 13:57:37.682713 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ba709b5-16eb-458a-a3ca-8d430acaf634-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm\" (UID: \"6ba709b5-16eb-458a-a3ca-8d430acaf634\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" Dec 02 13:57:37 crc kubenswrapper[4625]: I1202 13:57:37.682711 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ba709b5-16eb-458a-a3ca-8d430acaf634-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm\" (UID: \"6ba709b5-16eb-458a-a3ca-8d430acaf634\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" Dec 02 13:57:37 crc kubenswrapper[4625]: I1202 13:57:37.706435 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g78ml\" (UniqueName: \"kubernetes.io/projected/6ba709b5-16eb-458a-a3ca-8d430acaf634-kube-api-access-g78ml\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm\" (UID: \"6ba709b5-16eb-458a-a3ca-8d430acaf634\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" Dec 02 13:57:37 crc kubenswrapper[4625]: I1202 13:57:37.860916 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" Dec 02 13:57:38 crc kubenswrapper[4625]: I1202 13:57:38.127910 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm"] Dec 02 13:57:39 crc kubenswrapper[4625]: I1202 13:57:39.085260 4625 generic.go:334] "Generic (PLEG): container finished" podID="6ba709b5-16eb-458a-a3ca-8d430acaf634" containerID="42a8c5eedba7a3bc548f4fafb2b902dae86434d4e35e1bd18743c55a033c3afe" exitCode=0 Dec 02 13:57:39 crc kubenswrapper[4625]: I1202 13:57:39.085351 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" event={"ID":"6ba709b5-16eb-458a-a3ca-8d430acaf634","Type":"ContainerDied","Data":"42a8c5eedba7a3bc548f4fafb2b902dae86434d4e35e1bd18743c55a033c3afe"} Dec 02 13:57:39 crc kubenswrapper[4625]: I1202 13:57:39.086640 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" event={"ID":"6ba709b5-16eb-458a-a3ca-8d430acaf634","Type":"ContainerStarted","Data":"4e339e799cff885880f938dbd3bf48508585262ea9b74db8cf2728098997c958"} Dec 02 13:57:39 crc kubenswrapper[4625]: I1202 13:57:39.615896 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8vx6q"] Dec 02 13:57:39 crc kubenswrapper[4625]: I1202 13:57:39.617302 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:57:39 crc kubenswrapper[4625]: I1202 13:57:39.631050 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8vx6q"] Dec 02 13:57:39 crc kubenswrapper[4625]: I1202 13:57:39.767083 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be69a743-2df9-4547-bd86-d5074ac34e04-catalog-content\") pod \"redhat-operators-8vx6q\" (UID: \"be69a743-2df9-4547-bd86-d5074ac34e04\") " pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:57:39 crc kubenswrapper[4625]: I1202 13:57:39.767494 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be69a743-2df9-4547-bd86-d5074ac34e04-utilities\") pod \"redhat-operators-8vx6q\" (UID: \"be69a743-2df9-4547-bd86-d5074ac34e04\") " pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:57:39 crc kubenswrapper[4625]: I1202 13:57:39.767619 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgcfg\" (UniqueName: \"kubernetes.io/projected/be69a743-2df9-4547-bd86-d5074ac34e04-kube-api-access-rgcfg\") pod \"redhat-operators-8vx6q\" (UID: \"be69a743-2df9-4547-bd86-d5074ac34e04\") " pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:57:39 crc kubenswrapper[4625]: I1202 13:57:39.868242 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be69a743-2df9-4547-bd86-d5074ac34e04-catalog-content\") pod \"redhat-operators-8vx6q\" (UID: \"be69a743-2df9-4547-bd86-d5074ac34e04\") " pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:57:39 crc kubenswrapper[4625]: I1202 13:57:39.868592 4625 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be69a743-2df9-4547-bd86-d5074ac34e04-utilities\") pod \"redhat-operators-8vx6q\" (UID: \"be69a743-2df9-4547-bd86-d5074ac34e04\") " pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:57:39 crc kubenswrapper[4625]: I1202 13:57:39.868670 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgcfg\" (UniqueName: \"kubernetes.io/projected/be69a743-2df9-4547-bd86-d5074ac34e04-kube-api-access-rgcfg\") pod \"redhat-operators-8vx6q\" (UID: \"be69a743-2df9-4547-bd86-d5074ac34e04\") " pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:57:39 crc kubenswrapper[4625]: I1202 13:57:39.868870 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be69a743-2df9-4547-bd86-d5074ac34e04-catalog-content\") pod \"redhat-operators-8vx6q\" (UID: \"be69a743-2df9-4547-bd86-d5074ac34e04\") " pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:57:39 crc kubenswrapper[4625]: I1202 13:57:39.869407 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be69a743-2df9-4547-bd86-d5074ac34e04-utilities\") pod \"redhat-operators-8vx6q\" (UID: \"be69a743-2df9-4547-bd86-d5074ac34e04\") " pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:57:39 crc kubenswrapper[4625]: I1202 13:57:39.896771 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgcfg\" (UniqueName: \"kubernetes.io/projected/be69a743-2df9-4547-bd86-d5074ac34e04-kube-api-access-rgcfg\") pod \"redhat-operators-8vx6q\" (UID: \"be69a743-2df9-4547-bd86-d5074ac34e04\") " pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:57:39 crc kubenswrapper[4625]: I1202 13:57:39.933055 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:57:40 crc kubenswrapper[4625]: I1202 13:57:40.386203 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8vx6q"] Dec 02 13:57:40 crc kubenswrapper[4625]: W1202 13:57:40.396281 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe69a743_2df9_4547_bd86_d5074ac34e04.slice/crio-50c1d12cf67edc89589b3b7d93a501c2e491ba7163c6823a6ff4992e46972196 WatchSource:0}: Error finding container 50c1d12cf67edc89589b3b7d93a501c2e491ba7163c6823a6ff4992e46972196: Status 404 returned error can't find the container with id 50c1d12cf67edc89589b3b7d93a501c2e491ba7163c6823a6ff4992e46972196 Dec 02 13:57:41 crc kubenswrapper[4625]: I1202 13:57:41.196045 4625 generic.go:334] "Generic (PLEG): container finished" podID="be69a743-2df9-4547-bd86-d5074ac34e04" containerID="938c3889b4507b21ce591d45c893e738243a8232e4074f5080342c0b858e3cec" exitCode=0 Dec 02 13:57:41 crc kubenswrapper[4625]: I1202 13:57:41.196370 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vx6q" event={"ID":"be69a743-2df9-4547-bd86-d5074ac34e04","Type":"ContainerDied","Data":"938c3889b4507b21ce591d45c893e738243a8232e4074f5080342c0b858e3cec"} Dec 02 13:57:41 crc kubenswrapper[4625]: I1202 13:57:41.196410 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vx6q" event={"ID":"be69a743-2df9-4547-bd86-d5074ac34e04","Type":"ContainerStarted","Data":"50c1d12cf67edc89589b3b7d93a501c2e491ba7163c6823a6ff4992e46972196"} Dec 02 13:57:42 crc kubenswrapper[4625]: I1202 13:57:42.205127 4625 generic.go:334] "Generic (PLEG): container finished" podID="6ba709b5-16eb-458a-a3ca-8d430acaf634" containerID="5f8b5309af17041e4fe7c522720edf7a5e0efcf89ad5fddc1f643046f130c7f3" exitCode=0 Dec 02 13:57:42 crc kubenswrapper[4625]: I1202 13:57:42.205195 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" event={"ID":"6ba709b5-16eb-458a-a3ca-8d430acaf634","Type":"ContainerDied","Data":"5f8b5309af17041e4fe7c522720edf7a5e0efcf89ad5fddc1f643046f130c7f3"} Dec 02 13:57:43 crc kubenswrapper[4625]: I1202 13:57:43.214421 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vx6q" event={"ID":"be69a743-2df9-4547-bd86-d5074ac34e04","Type":"ContainerStarted","Data":"7e53a81a8490b94215db31060c16da438688fb888b515833ebf5ea0f78093875"} Dec 02 13:57:43 crc kubenswrapper[4625]: I1202 13:57:43.217454 4625 generic.go:334] "Generic (PLEG): container finished" podID="6ba709b5-16eb-458a-a3ca-8d430acaf634" containerID="b977ecbf16e7645487c13e1e04a82e3cb9ca147cbe9ed9cf4e7d29e97cfdea13" exitCode=0 Dec 02 13:57:43 crc kubenswrapper[4625]: I1202 13:57:43.217496 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" event={"ID":"6ba709b5-16eb-458a-a3ca-8d430acaf634","Type":"ContainerDied","Data":"b977ecbf16e7645487c13e1e04a82e3cb9ca147cbe9ed9cf4e7d29e97cfdea13"} Dec 02 13:57:44 crc kubenswrapper[4625]: I1202 13:57:44.678475 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" Dec 02 13:57:44 crc kubenswrapper[4625]: I1202 13:57:44.841121 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g78ml\" (UniqueName: \"kubernetes.io/projected/6ba709b5-16eb-458a-a3ca-8d430acaf634-kube-api-access-g78ml\") pod \"6ba709b5-16eb-458a-a3ca-8d430acaf634\" (UID: \"6ba709b5-16eb-458a-a3ca-8d430acaf634\") " Dec 02 13:57:44 crc kubenswrapper[4625]: I1202 13:57:44.841211 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ba709b5-16eb-458a-a3ca-8d430acaf634-bundle\") pod \"6ba709b5-16eb-458a-a3ca-8d430acaf634\" (UID: \"6ba709b5-16eb-458a-a3ca-8d430acaf634\") " Dec 02 13:57:44 crc kubenswrapper[4625]: I1202 13:57:44.841269 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ba709b5-16eb-458a-a3ca-8d430acaf634-util\") pod \"6ba709b5-16eb-458a-a3ca-8d430acaf634\" (UID: \"6ba709b5-16eb-458a-a3ca-8d430acaf634\") " Dec 02 13:57:44 crc kubenswrapper[4625]: I1202 13:57:44.842176 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ba709b5-16eb-458a-a3ca-8d430acaf634-bundle" (OuterVolumeSpecName: "bundle") pod "6ba709b5-16eb-458a-a3ca-8d430acaf634" (UID: "6ba709b5-16eb-458a-a3ca-8d430acaf634"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:57:44 crc kubenswrapper[4625]: I1202 13:57:44.849726 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba709b5-16eb-458a-a3ca-8d430acaf634-kube-api-access-g78ml" (OuterVolumeSpecName: "kube-api-access-g78ml") pod "6ba709b5-16eb-458a-a3ca-8d430acaf634" (UID: "6ba709b5-16eb-458a-a3ca-8d430acaf634"). InnerVolumeSpecName "kube-api-access-g78ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:57:44 crc kubenswrapper[4625]: I1202 13:57:44.857625 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ba709b5-16eb-458a-a3ca-8d430acaf634-util" (OuterVolumeSpecName: "util") pod "6ba709b5-16eb-458a-a3ca-8d430acaf634" (UID: "6ba709b5-16eb-458a-a3ca-8d430acaf634"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:57:44 crc kubenswrapper[4625]: I1202 13:57:44.942734 4625 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ba709b5-16eb-458a-a3ca-8d430acaf634-util\") on node \"crc\" DevicePath \"\"" Dec 02 13:57:44 crc kubenswrapper[4625]: I1202 13:57:44.942781 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g78ml\" (UniqueName: \"kubernetes.io/projected/6ba709b5-16eb-458a-a3ca-8d430acaf634-kube-api-access-g78ml\") on node \"crc\" DevicePath \"\"" Dec 02 13:57:44 crc kubenswrapper[4625]: I1202 13:57:44.942796 4625 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ba709b5-16eb-458a-a3ca-8d430acaf634-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 13:57:45 crc kubenswrapper[4625]: I1202 13:57:45.243008 4625 generic.go:334] "Generic (PLEG): container finished" podID="be69a743-2df9-4547-bd86-d5074ac34e04" containerID="7e53a81a8490b94215db31060c16da438688fb888b515833ebf5ea0f78093875" exitCode=0 Dec 02 13:57:45 crc kubenswrapper[4625]: I1202 13:57:45.243100 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vx6q" event={"ID":"be69a743-2df9-4547-bd86-d5074ac34e04","Type":"ContainerDied","Data":"7e53a81a8490b94215db31060c16da438688fb888b515833ebf5ea0f78093875"} Dec 02 13:57:45 crc kubenswrapper[4625]: I1202 13:57:45.250755 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" event={"ID":"6ba709b5-16eb-458a-a3ca-8d430acaf634","Type":"ContainerDied","Data":"4e339e799cff885880f938dbd3bf48508585262ea9b74db8cf2728098997c958"} Dec 02 13:57:45 crc kubenswrapper[4625]: I1202 13:57:45.250800 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e339e799cff885880f938dbd3bf48508585262ea9b74db8cf2728098997c958" Dec 02 13:57:45 crc kubenswrapper[4625]: I1202 13:57:45.250857 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm" Dec 02 13:57:46 crc kubenswrapper[4625]: I1202 13:57:46.258509 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vx6q" event={"ID":"be69a743-2df9-4547-bd86-d5074ac34e04","Type":"ContainerStarted","Data":"edacf25786a752697ad8cad9508a8bb4aee7edc379b64519f455c59c276e316d"} Dec 02 13:57:46 crc kubenswrapper[4625]: I1202 13:57:46.279229 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8vx6q" podStartSLOduration=2.606437895 podStartE2EDuration="7.279208362s" podCreationTimestamp="2025-12-02 13:57:39 +0000 UTC" firstStartedPulling="2025-12-02 13:57:41.198603048 +0000 UTC m=+817.160780123" lastFinishedPulling="2025-12-02 13:57:45.871373515 +0000 UTC m=+821.833550590" observedRunningTime="2025-12-02 13:57:46.275153931 +0000 UTC m=+822.237331016" watchObservedRunningTime="2025-12-02 13:57:46.279208362 +0000 UTC m=+822.241385437" Dec 02 13:57:48 crc kubenswrapper[4625]: I1202 13:57:48.166933 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-7wnsd"] Dec 02 13:57:48 crc kubenswrapper[4625]: E1202 13:57:48.167532 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba709b5-16eb-458a-a3ca-8d430acaf634" containerName="pull" Dec 02 13:57:48 crc kubenswrapper[4625]: I1202 13:57:48.167549 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba709b5-16eb-458a-a3ca-8d430acaf634" containerName="pull" Dec 02 13:57:48 crc kubenswrapper[4625]: E1202 13:57:48.167563 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba709b5-16eb-458a-a3ca-8d430acaf634" containerName="extract" Dec 02 13:57:48 crc kubenswrapper[4625]: I1202 13:57:48.167572 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba709b5-16eb-458a-a3ca-8d430acaf634" containerName="extract" Dec 02 13:57:48 crc kubenswrapper[4625]: E1202 13:57:48.167583 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba709b5-16eb-458a-a3ca-8d430acaf634" containerName="util" Dec 02 13:57:48 crc kubenswrapper[4625]: I1202 13:57:48.167591 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba709b5-16eb-458a-a3ca-8d430acaf634" containerName="util" Dec 02 13:57:48 crc kubenswrapper[4625]: I1202 13:57:48.167726 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba709b5-16eb-458a-a3ca-8d430acaf634" containerName="extract" Dec 02 13:57:48 crc kubenswrapper[4625]: I1202 13:57:48.168180 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-7wnsd" Dec 02 13:57:48 crc kubenswrapper[4625]: I1202 13:57:48.170286 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-r7sp6" Dec 02 13:57:48 crc kubenswrapper[4625]: I1202 13:57:48.170509 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 02 13:57:48 crc kubenswrapper[4625]: I1202 13:57:48.172745 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 02 13:57:48 crc kubenswrapper[4625]: I1202 13:57:48.206881 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9tw9\" (UniqueName: \"kubernetes.io/projected/c990b211-885a-4f31-835b-ebc7d42db8dc-kube-api-access-v9tw9\") pod \"nmstate-operator-5b5b58f5c8-7wnsd\" (UID: \"c990b211-885a-4f31-835b-ebc7d42db8dc\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-7wnsd" Dec 02 13:57:48 crc kubenswrapper[4625]: I1202 13:57:48.324341 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9tw9\" (UniqueName: \"kubernetes.io/projected/c990b211-885a-4f31-835b-ebc7d42db8dc-kube-api-access-v9tw9\") pod \"nmstate-operator-5b5b58f5c8-7wnsd\" (UID: \"c990b211-885a-4f31-835b-ebc7d42db8dc\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-7wnsd" Dec 02 13:57:48 crc kubenswrapper[4625]: I1202 13:57:48.370701 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-7wnsd"] Dec 02 13:57:48 crc kubenswrapper[4625]: I1202 13:57:48.378912 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9tw9\" (UniqueName: \"kubernetes.io/projected/c990b211-885a-4f31-835b-ebc7d42db8dc-kube-api-access-v9tw9\") pod \"nmstate-operator-5b5b58f5c8-7wnsd\" (UID: \"c990b211-885a-4f31-835b-ebc7d42db8dc\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-7wnsd" Dec 02 13:57:48 crc kubenswrapper[4625]: I1202 13:57:48.505621 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-7wnsd" Dec 02 13:57:48 crc kubenswrapper[4625]: I1202 13:57:48.982643 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-7wnsd"] Dec 02 13:57:48 crc kubenswrapper[4625]: W1202 13:57:48.987915 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc990b211_885a_4f31_835b_ebc7d42db8dc.slice/crio-477dd498223b506425f24828911c4b9b829f7e88378fde94d5bb56913395980e WatchSource:0}: Error finding container 477dd498223b506425f24828911c4b9b829f7e88378fde94d5bb56913395980e: Status 404 returned error can't find the container with id 477dd498223b506425f24828911c4b9b829f7e88378fde94d5bb56913395980e Dec 02 13:57:49 crc kubenswrapper[4625]: I1202 13:57:49.271809 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 13:57:49 crc kubenswrapper[4625]: I1202 13:57:49.271881 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 13:57:49 crc kubenswrapper[4625]: I1202 13:57:49.271937 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 13:57:49 crc kubenswrapper[4625]: I1202 13:57:49.272732 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1d575805cab2283b92f1a4e7b510b132f2ba9784cf488248063f8b6d7df5e2f"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 13:57:49 crc kubenswrapper[4625]: I1202 13:57:49.272823 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://c1d575805cab2283b92f1a4e7b510b132f2ba9784cf488248063f8b6d7df5e2f" gracePeriod=600 Dec 02 13:57:49 crc kubenswrapper[4625]: I1202 13:57:49.348624 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-7wnsd" event={"ID":"c990b211-885a-4f31-835b-ebc7d42db8dc","Type":"ContainerStarted","Data":"477dd498223b506425f24828911c4b9b829f7e88378fde94d5bb56913395980e"} Dec 02 13:57:49 crc kubenswrapper[4625]: I1202 13:57:49.933570 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:57:49 crc kubenswrapper[4625]: I1202 13:57:49.933633 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:57:50 crc kubenswrapper[4625]: I1202 13:57:50.362700 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="c1d575805cab2283b92f1a4e7b510b132f2ba9784cf488248063f8b6d7df5e2f" exitCode=0 Dec 02 13:57:50 
crc kubenswrapper[4625]: I1202 13:57:50.362852 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"c1d575805cab2283b92f1a4e7b510b132f2ba9784cf488248063f8b6d7df5e2f"} Dec 02 13:57:50 crc kubenswrapper[4625]: I1202 13:57:50.362993 4625 scope.go:117] "RemoveContainer" containerID="498c40948997cf435dda7f03aba2bbba840fadd308257b759a0a93eec21ef79e" Dec 02 13:57:50 crc kubenswrapper[4625]: I1202 13:57:50.987059 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8vx6q" podUID="be69a743-2df9-4547-bd86-d5074ac34e04" containerName="registry-server" probeResult="failure" output=< Dec 02 13:57:50 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s Dec 02 13:57:50 crc kubenswrapper[4625]: > Dec 02 13:57:51 crc kubenswrapper[4625]: I1202 13:57:51.385026 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"26c37d19f3fe7a2800125178b96518c47f7905764a81c00a7c86f8da62aaaa2f"} Dec 02 13:57:52 crc kubenswrapper[4625]: I1202 13:57:52.393511 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-7wnsd" event={"ID":"c990b211-885a-4f31-835b-ebc7d42db8dc","Type":"ContainerStarted","Data":"fbc187a11c47186aa1ce3308cdc6d9e554a6cd5e94c8f63e2eaa29ffa75a2c08"} Dec 02 13:57:52 crc kubenswrapper[4625]: I1202 13:57:52.416548 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-7wnsd" podStartSLOduration=1.441191826 podStartE2EDuration="4.416527639s" podCreationTimestamp="2025-12-02 13:57:48 +0000 UTC" firstStartedPulling="2025-12-02 13:57:48.990266261 +0000 UTC m=+824.952443336" lastFinishedPulling="2025-12-02 13:57:51.965602074 +0000 UTC m=+827.927779149" observedRunningTime="2025-12-02 13:57:52.41106841 +0000 UTC m=+828.373245485" watchObservedRunningTime="2025-12-02 13:57:52.416527639 +0000 UTC m=+828.378704704" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.519784 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-n6vs9"] Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.521179 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-n6vs9" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.523712 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-tsm82" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.543877 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd"] Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.544755 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.549008 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-n6vs9"] Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.549093 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.625429 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd"] Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.677253 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-2gbcf"] Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.678104 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2gbcf" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.680450 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd8rb\" (UniqueName: \"kubernetes.io/projected/ec26d36a-6a53-45ea-b678-ff1f2f663e4b-kube-api-access-vd8rb\") pod \"nmstate-metrics-7f946cbc9-n6vs9\" (UID: \"ec26d36a-6a53-45ea-b678-ff1f2f663e4b\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-n6vs9" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.680608 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/52f9dddb-330a-4c13-9bb7-6a7766b6c4ec-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-27pvd\" (UID: \"52f9dddb-330a-4c13-9bb7-6a7766b6c4ec\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.680749 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwmqn\" (UniqueName: \"kubernetes.io/projected/52f9dddb-330a-4c13-9bb7-6a7766b6c4ec-kube-api-access-bwmqn\") pod \"nmstate-webhook-5f6d4c5ccb-27pvd\" (UID: \"52f9dddb-330a-4c13-9bb7-6a7766b6c4ec\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.764276 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7"] Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.765584 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.769669 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.769935 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zfh6z" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.770060 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.776621 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7"] Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.782090 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c272e6ee-0c53-4601-bb13-b19116b52d78-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-xbmh7\" (UID: \"c272e6ee-0c53-4601-bb13-b19116b52d78\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.782145 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab-nmstate-lock\") pod \"nmstate-handler-2gbcf\" (UID: \"a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab\") " pod="openshift-nmstate/nmstate-handler-2gbcf" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.782203 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd8rb\" (UniqueName: \"kubernetes.io/projected/ec26d36a-6a53-45ea-b678-ff1f2f663e4b-kube-api-access-vd8rb\") pod \"nmstate-metrics-7f946cbc9-n6vs9\" (UID: \"ec26d36a-6a53-45ea-b678-ff1f2f663e4b\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-n6vs9" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.782274 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab-dbus-socket\") pod \"nmstate-handler-2gbcf\" (UID: \"a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab\") " pod="openshift-nmstate/nmstate-handler-2gbcf" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.782303 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/52f9dddb-330a-4c13-9bb7-6a7766b6c4ec-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-27pvd\" (UID: \"52f9dddb-330a-4c13-9bb7-6a7766b6c4ec\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.782377 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnss6\" (UniqueName: \"kubernetes.io/projected/a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab-kube-api-access-mnss6\") pod \"nmstate-handler-2gbcf\" (UID: \"a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab\") " pod="openshift-nmstate/nmstate-handler-2gbcf" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.782407 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbtg6\" (UniqueName: \"kubernetes.io/projected/c272e6ee-0c53-4601-bb13-b19116b52d78-kube-api-access-jbtg6\") pod 
\"nmstate-console-plugin-7fbb5f6569-xbmh7\" (UID: \"c272e6ee-0c53-4601-bb13-b19116b52d78\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.782442 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwmqn\" (UniqueName: \"kubernetes.io/projected/52f9dddb-330a-4c13-9bb7-6a7766b6c4ec-kube-api-access-bwmqn\") pod \"nmstate-webhook-5f6d4c5ccb-27pvd\" (UID: \"52f9dddb-330a-4c13-9bb7-6a7766b6c4ec\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.782475 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c272e6ee-0c53-4601-bb13-b19116b52d78-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-xbmh7\" (UID: \"c272e6ee-0c53-4601-bb13-b19116b52d78\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.782497 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab-ovs-socket\") pod \"nmstate-handler-2gbcf\" (UID: \"a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab\") " pod="openshift-nmstate/nmstate-handler-2gbcf" Dec 02 13:57:57 crc kubenswrapper[4625]: E1202 13:57:57.782656 4625 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 02 13:57:57 crc kubenswrapper[4625]: E1202 13:57:57.782755 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52f9dddb-330a-4c13-9bb7-6a7766b6c4ec-tls-key-pair podName:52f9dddb-330a-4c13-9bb7-6a7766b6c4ec nodeName:}" failed. No retries permitted until 2025-12-02 13:57:58.282728191 +0000 UTC m=+834.244905306 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/52f9dddb-330a-4c13-9bb7-6a7766b6c4ec-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-27pvd" (UID: "52f9dddb-330a-4c13-9bb7-6a7766b6c4ec") : secret "openshift-nmstate-webhook" not found Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.808392 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwmqn\" (UniqueName: \"kubernetes.io/projected/52f9dddb-330a-4c13-9bb7-6a7766b6c4ec-kube-api-access-bwmqn\") pod \"nmstate-webhook-5f6d4c5ccb-27pvd\" (UID: \"52f9dddb-330a-4c13-9bb7-6a7766b6c4ec\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.808578 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd8rb\" (UniqueName: \"kubernetes.io/projected/ec26d36a-6a53-45ea-b678-ff1f2f663e4b-kube-api-access-vd8rb\") pod \"nmstate-metrics-7f946cbc9-n6vs9\" (UID: \"ec26d36a-6a53-45ea-b678-ff1f2f663e4b\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-n6vs9" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.837265 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-n6vs9" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.884290 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c272e6ee-0c53-4601-bb13-b19116b52d78-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-xbmh7\" (UID: \"c272e6ee-0c53-4601-bb13-b19116b52d78\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.884397 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab-nmstate-lock\") pod \"nmstate-handler-2gbcf\" (UID: \"a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab\") " pod="openshift-nmstate/nmstate-handler-2gbcf" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.884478 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab-dbus-socket\") pod \"nmstate-handler-2gbcf\" (UID: \"a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab\") " pod="openshift-nmstate/nmstate-handler-2gbcf" Dec 02 13:57:57 crc kubenswrapper[4625]: E1202 13:57:57.884490 4625 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.884552 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnss6\" (UniqueName: \"kubernetes.io/projected/a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab-kube-api-access-mnss6\") pod \"nmstate-handler-2gbcf\" (UID: \"a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab\") " pod="openshift-nmstate/nmstate-handler-2gbcf" Dec 02 13:57:57 crc kubenswrapper[4625]: E1202 13:57:57.884594 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c272e6ee-0c53-4601-bb13-b19116b52d78-plugin-serving-cert podName:c272e6ee-0c53-4601-bb13-b19116b52d78 nodeName:}" failed. No retries permitted until 2025-12-02 13:57:58.384569235 +0000 UTC m=+834.346746500 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/c272e6ee-0c53-4601-bb13-b19116b52d78-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-xbmh7" (UID: "c272e6ee-0c53-4601-bb13-b19116b52d78") : secret "plugin-serving-cert" not found Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.884638 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbtg6\" (UniqueName: \"kubernetes.io/projected/c272e6ee-0c53-4601-bb13-b19116b52d78-kube-api-access-jbtg6\") pod \"nmstate-console-plugin-7fbb5f6569-xbmh7\" (UID: \"c272e6ee-0c53-4601-bb13-b19116b52d78\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.884742 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c272e6ee-0c53-4601-bb13-b19116b52d78-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-xbmh7\" (UID: \"c272e6ee-0c53-4601-bb13-b19116b52d78\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.884773 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab-ovs-socket\") pod \"nmstate-handler-2gbcf\" (UID: \"a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab\") " pod="openshift-nmstate/nmstate-handler-2gbcf" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.884987 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab-ovs-socket\") pod \"nmstate-handler-2gbcf\" (UID: \"a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab\") " pod="openshift-nmstate/nmstate-handler-2gbcf" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.885160 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab-nmstate-lock\") pod \"nmstate-handler-2gbcf\" (UID: \"a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab\") " pod="openshift-nmstate/nmstate-handler-2gbcf" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.885699 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab-dbus-socket\") pod \"nmstate-handler-2gbcf\" (UID: \"a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab\") " pod="openshift-nmstate/nmstate-handler-2gbcf" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.886934 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c272e6ee-0c53-4601-bb13-b19116b52d78-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-xbmh7\" (UID: \"c272e6ee-0c53-4601-bb13-b19116b52d78\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.916714 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnss6\" (UniqueName: \"kubernetes.io/projected/a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab-kube-api-access-mnss6\") pod \"nmstate-handler-2gbcf\" (UID: \"a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab\") " pod="openshift-nmstate/nmstate-handler-2gbcf" Dec 02 13:57:57 crc kubenswrapper[4625]: I1202 13:57:57.925238 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jbtg6\" (UniqueName: \"kubernetes.io/projected/c272e6ee-0c53-4601-bb13-b19116b52d78-kube-api-access-jbtg6\") pod \"nmstate-console-plugin-7fbb5f6569-xbmh7\" (UID: \"c272e6ee-0c53-4601-bb13-b19116b52d78\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.001277 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2gbcf" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.003615 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-74d6c9d747-fdlkr"] Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.004796 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.072481 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74d6c9d747-fdlkr"] Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.090796 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7f257bd9-e428-4d4c-9f32-4447bbd373e1-oauth-serving-cert\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.090852 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7f257bd9-e428-4d4c-9f32-4447bbd373e1-console-oauth-config\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.090902 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f257bd9-e428-4d4c-9f32-4447bbd373e1-trusted-ca-bundle\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.090934 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7f257bd9-e428-4d4c-9f32-4447bbd373e1-service-ca\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.090987 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2lz9\" (UniqueName: \"kubernetes.io/projected/7f257bd9-e428-4d4c-9f32-4447bbd373e1-kube-api-access-h2lz9\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.091022 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f257bd9-e428-4d4c-9f32-4447bbd373e1-console-serving-cert\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.091060 4625 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7f257bd9-e428-4d4c-9f32-4447bbd373e1-console-config\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.194692 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7f257bd9-e428-4d4c-9f32-4447bbd373e1-console-config\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.194784 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7f257bd9-e428-4d4c-9f32-4447bbd373e1-oauth-serving-cert\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.194813 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7f257bd9-e428-4d4c-9f32-4447bbd373e1-console-oauth-config\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.194852 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f257bd9-e428-4d4c-9f32-4447bbd373e1-trusted-ca-bundle\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.194894 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7f257bd9-e428-4d4c-9f32-4447bbd373e1-service-ca\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.194932 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2lz9\" (UniqueName: \"kubernetes.io/projected/7f257bd9-e428-4d4c-9f32-4447bbd373e1-kube-api-access-h2lz9\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.194962 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f257bd9-e428-4d4c-9f32-4447bbd373e1-console-serving-cert\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.196529 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7f257bd9-e428-4d4c-9f32-4447bbd373e1-oauth-serving-cert\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.196731 
4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7f257bd9-e428-4d4c-9f32-4447bbd373e1-service-ca\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.196844 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7f257bd9-e428-4d4c-9f32-4447bbd373e1-console-config\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.198358 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f257bd9-e428-4d4c-9f32-4447bbd373e1-trusted-ca-bundle\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.200558 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7f257bd9-e428-4d4c-9f32-4447bbd373e1-console-oauth-config\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.200990 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f257bd9-e428-4d4c-9f32-4447bbd373e1-console-serving-cert\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.220064 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2lz9\" (UniqueName: \"kubernetes.io/projected/7f257bd9-e428-4d4c-9f32-4447bbd373e1-kube-api-access-h2lz9\") pod \"console-74d6c9d747-fdlkr\" (UID: \"7f257bd9-e428-4d4c-9f32-4447bbd373e1\") " pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.296018 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/52f9dddb-330a-4c13-9bb7-6a7766b6c4ec-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-27pvd\" (UID: \"52f9dddb-330a-4c13-9bb7-6a7766b6c4ec\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.299298 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/52f9dddb-330a-4c13-9bb7-6a7766b6c4ec-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-27pvd\" (UID: \"52f9dddb-330a-4c13-9bb7-6a7766b6c4ec\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.325190 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-74d6c9d747-fdlkr" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.396963 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c272e6ee-0c53-4601-bb13-b19116b52d78-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-xbmh7\" (UID: \"c272e6ee-0c53-4601-bb13-b19116b52d78\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.401084 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c272e6ee-0c53-4601-bb13-b19116b52d78-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-xbmh7\" (UID: \"c272e6ee-0c53-4601-bb13-b19116b52d78\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.425602 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2gbcf" event={"ID":"a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab","Type":"ContainerStarted","Data":"422bf20cc8274a01e9b52416224fe06a967c424cd1b9c0e915e6fcf1ac625b8f"} Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.438409 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-n6vs9"] Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.460027 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.607471 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74d6c9d747-fdlkr"] Dec 02 13:57:58 crc kubenswrapper[4625]: W1202 13:57:58.640276 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f257bd9_e428_4d4c_9f32_4447bbd373e1.slice/crio-48cf1eb87bc44c758845396333cba224616b19623aa7b01f17df7254051338e8 WatchSource:0}: Error finding container 48cf1eb87bc44c758845396333cba224616b19623aa7b01f17df7254051338e8: Status 404 returned error can't find the container with id 48cf1eb87bc44c758845396333cba224616b19623aa7b01f17df7254051338e8 Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.684293 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7" Dec 02 13:57:58 crc kubenswrapper[4625]: I1202 13:57:58.940129 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7"] Dec 02 13:57:58 crc kubenswrapper[4625]: W1202 13:57:58.962490 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc272e6ee_0c53_4601_bb13_b19116b52d78.slice/crio-28c4b335aaeb6b6d1ad8d96629e4099d40f40deeb09dd70fa7586a537a0459cb WatchSource:0}: Error finding container 28c4b335aaeb6b6d1ad8d96629e4099d40f40deeb09dd70fa7586a537a0459cb: Status 404 returned error can't find the container with id 28c4b335aaeb6b6d1ad8d96629e4099d40f40deeb09dd70fa7586a537a0459cb Dec 02 13:57:59 crc kubenswrapper[4625]: I1202 13:57:59.018502 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd"] Dec 02 13:57:59 crc kubenswrapper[4625]: W1202 13:57:59.029345 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52f9dddb_330a_4c13_9bb7_6a7766b6c4ec.slice/crio-5c9219708baa877249eb41e1b8a5952a51da142a8662ba7230ea6412a40d331c WatchSource:0}: Error finding container 5c9219708baa877249eb41e1b8a5952a51da142a8662ba7230ea6412a40d331c: Status 404 returned error can't find the container with id 5c9219708baa877249eb41e1b8a5952a51da142a8662ba7230ea6412a40d331c Dec 02 13:57:59 crc kubenswrapper[4625]: I1202 13:57:59.432812 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-n6vs9" event={"ID":"ec26d36a-6a53-45ea-b678-ff1f2f663e4b","Type":"ContainerStarted","Data":"47c67fa85a23397654ffa008fd8e207c393ecaf247bc72d26343d131ee054751"} Dec 02 13:57:59 crc kubenswrapper[4625]: I1202 13:57:59.434377 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7" event={"ID":"c272e6ee-0c53-4601-bb13-b19116b52d78","Type":"ContainerStarted","Data":"28c4b335aaeb6b6d1ad8d96629e4099d40f40deeb09dd70fa7586a537a0459cb"} Dec 02 13:57:59 crc kubenswrapper[4625]: I1202 13:57:59.435453 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd" event={"ID":"52f9dddb-330a-4c13-9bb7-6a7766b6c4ec","Type":"ContainerStarted","Data":"5c9219708baa877249eb41e1b8a5952a51da142a8662ba7230ea6412a40d331c"} Dec 02 13:57:59 crc kubenswrapper[4625]: I1202 13:57:59.438201 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74d6c9d747-fdlkr" event={"ID":"7f257bd9-e428-4d4c-9f32-4447bbd373e1","Type":"ContainerStarted","Data":"169cb5bcb3a0927b5cbabfb77da73a67fe7fc52d62ddde2f2263bb931fd61fa3"} Dec 02 13:57:59 crc kubenswrapper[4625]: I1202 13:57:59.438231 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74d6c9d747-fdlkr" event={"ID":"7f257bd9-e428-4d4c-9f32-4447bbd373e1","Type":"ContainerStarted","Data":"48cf1eb87bc44c758845396333cba224616b19623aa7b01f17df7254051338e8"} Dec 02 13:57:59 crc kubenswrapper[4625]: I1202 13:57:59.457670 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-74d6c9d747-fdlkr" podStartSLOduration=2.45765014 podStartE2EDuration="2.45765014s" podCreationTimestamp="2025-12-02 13:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-12-02 13:57:59.456978612 +0000 UTC m=+835.419155707" watchObservedRunningTime="2025-12-02 13:57:59.45765014 +0000 UTC m=+835.419827215" Dec 02 13:58:00 crc kubenswrapper[4625]: I1202 13:58:00.027543 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:58:00 crc kubenswrapper[4625]: I1202 13:58:00.109916 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:58:00 crc kubenswrapper[4625]: I1202 13:58:00.270930 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8vx6q"] Dec 02 13:58:01 crc kubenswrapper[4625]: I1202 13:58:01.453330 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8vx6q" podUID="be69a743-2df9-4547-bd86-d5074ac34e04" containerName="registry-server" containerID="cri-o://edacf25786a752697ad8cad9508a8bb4aee7edc379b64519f455c59c276e316d" gracePeriod=2 Dec 02 13:58:01 crc kubenswrapper[4625]: I1202 13:58:01.886542 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:58:01 crc kubenswrapper[4625]: I1202 13:58:01.973288 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be69a743-2df9-4547-bd86-d5074ac34e04-catalog-content\") pod \"be69a743-2df9-4547-bd86-d5074ac34e04\" (UID: \"be69a743-2df9-4547-bd86-d5074ac34e04\") " Dec 02 13:58:01 crc kubenswrapper[4625]: I1202 13:58:01.973387 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be69a743-2df9-4547-bd86-d5074ac34e04-utilities\") pod \"be69a743-2df9-4547-bd86-d5074ac34e04\" (UID: \"be69a743-2df9-4547-bd86-d5074ac34e04\") " Dec 02 13:58:01 crc kubenswrapper[4625]: I1202 13:58:01.973487 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgcfg\" (UniqueName: \"kubernetes.io/projected/be69a743-2df9-4547-bd86-d5074ac34e04-kube-api-access-rgcfg\") pod \"be69a743-2df9-4547-bd86-d5074ac34e04\" (UID: \"be69a743-2df9-4547-bd86-d5074ac34e04\") " Dec 02 13:58:01 crc kubenswrapper[4625]: I1202 13:58:01.977377 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be69a743-2df9-4547-bd86-d5074ac34e04-utilities" (OuterVolumeSpecName: "utilities") pod "be69a743-2df9-4547-bd86-d5074ac34e04" (UID: "be69a743-2df9-4547-bd86-d5074ac34e04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:58:01 crc kubenswrapper[4625]: I1202 13:58:01.984182 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be69a743-2df9-4547-bd86-d5074ac34e04-kube-api-access-rgcfg" (OuterVolumeSpecName: "kube-api-access-rgcfg") pod "be69a743-2df9-4547-bd86-d5074ac34e04" (UID: "be69a743-2df9-4547-bd86-d5074ac34e04"). InnerVolumeSpecName "kube-api-access-rgcfg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.075167 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgcfg\" (UniqueName: \"kubernetes.io/projected/be69a743-2df9-4547-bd86-d5074ac34e04-kube-api-access-rgcfg\") on node \"crc\" DevicePath \"\"" Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.075197 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be69a743-2df9-4547-bd86-d5074ac34e04-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.090893 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be69a743-2df9-4547-bd86-d5074ac34e04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be69a743-2df9-4547-bd86-d5074ac34e04" (UID: "be69a743-2df9-4547-bd86-d5074ac34e04"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.177720 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be69a743-2df9-4547-bd86-d5074ac34e04-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.461223 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2gbcf" event={"ID":"a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab","Type":"ContainerStarted","Data":"7192e784c5ff59345ff0918c9c95619fd4e011f8799d8fee17ac56d903b98ec6"} Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.461523 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-2gbcf" Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.464489 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-n6vs9" event={"ID":"ec26d36a-6a53-45ea-b678-ff1f2f663e4b","Type":"ContainerStarted","Data":"0b657f990c237fa7fff8dbd09e16c9be76a07f064acc6dd20b4572eb8fd7133b"} Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.468021 4625 generic.go:334] "Generic (PLEG): container finished" podID="be69a743-2df9-4547-bd86-d5074ac34e04" containerID="edacf25786a752697ad8cad9508a8bb4aee7edc379b64519f455c59c276e316d" exitCode=0 Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.468081 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vx6q" event={"ID":"be69a743-2df9-4547-bd86-d5074ac34e04","Type":"ContainerDied","Data":"edacf25786a752697ad8cad9508a8bb4aee7edc379b64519f455c59c276e316d"} Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.468105 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vx6q" event={"ID":"be69a743-2df9-4547-bd86-d5074ac34e04","Type":"ContainerDied","Data":"50c1d12cf67edc89589b3b7d93a501c2e491ba7163c6823a6ff4992e46972196"} Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.468131 4625 scope.go:117] "RemoveContainer" containerID="edacf25786a752697ad8cad9508a8bb4aee7edc379b64519f455c59c276e316d" Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.468247 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8vx6q" Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.483162 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-2gbcf" podStartSLOduration=2.16311396 podStartE2EDuration="5.483138843s" podCreationTimestamp="2025-12-02 13:57:57 +0000 UTC" firstStartedPulling="2025-12-02 13:57:58.040677181 +0000 UTC m=+834.002854256" lastFinishedPulling="2025-12-02 13:58:01.360702074 +0000 UTC m=+837.322879139" observedRunningTime="2025-12-02 13:58:02.477673344 +0000 UTC m=+838.439850419" watchObservedRunningTime="2025-12-02 13:58:02.483138843 +0000 UTC m=+838.445315928" Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.488770 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd" event={"ID":"52f9dddb-330a-4c13-9bb7-6a7766b6c4ec","Type":"ContainerStarted","Data":"fea5d51b870477fbaf5c5b38c2827bc0196e046da30f4be1d59f1fcd0e474fdd"} Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.489264 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd" Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.514056 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd" podStartSLOduration=3.201987095 podStartE2EDuration="5.514029797s" podCreationTimestamp="2025-12-02 13:57:57 +0000 UTC" firstStartedPulling="2025-12-02 13:57:59.032700426 +0000 UTC m=+834.994877491" lastFinishedPulling="2025-12-02 13:58:01.344743118 +0000 UTC m=+837.306920193" observedRunningTime="2025-12-02 13:58:02.51012341 +0000 UTC m=+838.472300485" watchObservedRunningTime="2025-12-02 13:58:02.514029797 +0000 UTC m=+838.476206872" Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.534488 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8vx6q"] Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.538107 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8vx6q"] Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.722274 4625 scope.go:117] "RemoveContainer" containerID="7e53a81a8490b94215db31060c16da438688fb888b515833ebf5ea0f78093875" Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.795493 4625 scope.go:117] "RemoveContainer" containerID="938c3889b4507b21ce591d45c893e738243a8232e4074f5080342c0b858e3cec" Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.844264 4625 scope.go:117] "RemoveContainer" containerID="edacf25786a752697ad8cad9508a8bb4aee7edc379b64519f455c59c276e316d" Dec 02 13:58:02 crc kubenswrapper[4625]: E1202 13:58:02.844754 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edacf25786a752697ad8cad9508a8bb4aee7edc379b64519f455c59c276e316d\": container with ID starting with edacf25786a752697ad8cad9508a8bb4aee7edc379b64519f455c59c276e316d not found: ID does not exist" containerID="edacf25786a752697ad8cad9508a8bb4aee7edc379b64519f455c59c276e316d" Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.844789 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edacf25786a752697ad8cad9508a8bb4aee7edc379b64519f455c59c276e316d"} err="failed to get container status \"edacf25786a752697ad8cad9508a8bb4aee7edc379b64519f455c59c276e316d\": rpc error: code = NotFound 
desc = could not find container \"edacf25786a752697ad8cad9508a8bb4aee7edc379b64519f455c59c276e316d\": container with ID starting with edacf25786a752697ad8cad9508a8bb4aee7edc379b64519f455c59c276e316d not found: ID does not exist"
Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.844809 4625 scope.go:117] "RemoveContainer" containerID="7e53a81a8490b94215db31060c16da438688fb888b515833ebf5ea0f78093875"
Dec 02 13:58:02 crc kubenswrapper[4625]: E1202 13:58:02.845419 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e53a81a8490b94215db31060c16da438688fb888b515833ebf5ea0f78093875\": container with ID starting with 7e53a81a8490b94215db31060c16da438688fb888b515833ebf5ea0f78093875 not found: ID does not exist" containerID="7e53a81a8490b94215db31060c16da438688fb888b515833ebf5ea0f78093875"
Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.845441 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e53a81a8490b94215db31060c16da438688fb888b515833ebf5ea0f78093875"} err="failed to get container status \"7e53a81a8490b94215db31060c16da438688fb888b515833ebf5ea0f78093875\": rpc error: code = NotFound desc = could not find container \"7e53a81a8490b94215db31060c16da438688fb888b515833ebf5ea0f78093875\": container with ID starting with 7e53a81a8490b94215db31060c16da438688fb888b515833ebf5ea0f78093875 not found: ID does not exist"
Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.845465 4625 scope.go:117] "RemoveContainer" containerID="938c3889b4507b21ce591d45c893e738243a8232e4074f5080342c0b858e3cec"
Dec 02 13:58:02 crc kubenswrapper[4625]: E1202 13:58:02.845699 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"938c3889b4507b21ce591d45c893e738243a8232e4074f5080342c0b858e3cec\": container with ID starting with 938c3889b4507b21ce591d45c893e738243a8232e4074f5080342c0b858e3cec not found: ID does not exist" containerID="938c3889b4507b21ce591d45c893e738243a8232e4074f5080342c0b858e3cec"
Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.845715 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"938c3889b4507b21ce591d45c893e738243a8232e4074f5080342c0b858e3cec"} err="failed to get container status \"938c3889b4507b21ce591d45c893e738243a8232e4074f5080342c0b858e3cec\": rpc error: code = NotFound desc = could not find container \"938c3889b4507b21ce591d45c893e738243a8232e4074f5080342c0b858e3cec\": container with ID starting with 938c3889b4507b21ce591d45c893e738243a8232e4074f5080342c0b858e3cec not found: ID does not exist"
Dec 02 13:58:02 crc kubenswrapper[4625]: I1202 13:58:02.865588 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be69a743-2df9-4547-bd86-d5074ac34e04" path="/var/lib/kubelet/pods/be69a743-2df9-4547-bd86-d5074ac34e04/volumes"
Dec 02 13:58:03 crc kubenswrapper[4625]: I1202 13:58:03.499343 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7" event={"ID":"c272e6ee-0c53-4601-bb13-b19116b52d78","Type":"ContainerStarted","Data":"c68edaa30677d80b231d92b2abf00b076e1f34a877449c914b2632814bd54602"}
Dec 02 13:58:03 crc kubenswrapper[4625]: I1202 13:58:03.521092 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-xbmh7" podStartSLOduration=2.691894652 podStartE2EDuration="6.521066172s" podCreationTimestamp="2025-12-02 13:57:57 +0000 UTC" firstStartedPulling="2025-12-02 13:57:58.970475284 +0000 UTC m=+834.932652359" lastFinishedPulling="2025-12-02 13:58:02.799646804 +0000 UTC m=+838.761823879" observedRunningTime="2025-12-02 13:58:03.517966477 +0000 UTC m=+839.480143572" watchObservedRunningTime="2025-12-02 13:58:03.521066172 +0000 UTC m=+839.483243247"
Dec 02 13:58:04 crc kubenswrapper[4625]: I1202 13:58:04.509354 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-n6vs9" event={"ID":"ec26d36a-6a53-45ea-b678-ff1f2f663e4b","Type":"ContainerStarted","Data":"b4ab0d6554f41aae67d039eaba95829228c2b0e1505920b25f3380ee884b9288"}
Dec 02 13:58:04 crc kubenswrapper[4625]: I1202 13:58:04.533474 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-n6vs9" podStartSLOduration=2.034871775 podStartE2EDuration="7.533444292s" podCreationTimestamp="2025-12-02 13:57:57 +0000 UTC" firstStartedPulling="2025-12-02 13:57:58.470182851 +0000 UTC m=+834.432359926" lastFinishedPulling="2025-12-02 13:58:03.968755368 +0000 UTC m=+839.930932443" observedRunningTime="2025-12-02 13:58:04.531814228 +0000 UTC m=+840.493991303" watchObservedRunningTime="2025-12-02 13:58:04.533444292 +0000 UTC m=+840.495621367"
Dec 02 13:58:08 crc kubenswrapper[4625]: I1202 13:58:08.025126 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-2gbcf"
Dec 02 13:58:08 crc kubenswrapper[4625]: I1202 13:58:08.325748 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-74d6c9d747-fdlkr"
Dec 02 13:58:08 crc kubenswrapper[4625]: I1202 13:58:08.325833 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-74d6c9d747-fdlkr"
Dec 02 13:58:08 crc kubenswrapper[4625]: I1202 13:58:08.330282 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-74d6c9d747-fdlkr"
Dec 02 13:58:08 crc kubenswrapper[4625]: I1202 13:58:08.542086 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-74d6c9d747-fdlkr"
Dec 02 13:58:08 crc kubenswrapper[4625]: I1202 13:58:08.609290 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-pr728"]
Dec 02 13:58:18 crc kubenswrapper[4625]: I1202 13:58:18.469048 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-27pvd"
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.675519 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-pr728" podUID="15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" containerName="console" containerID="cri-o://f3117bbd66580a445924b4f4ff214aeedfccce391a0459d7d403aa20fbf62529" gracePeriod=15
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.778670 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv"]
Dec 02 13:58:33 crc kubenswrapper[4625]: E1202 13:58:33.778918 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be69a743-2df9-4547-bd86-d5074ac34e04" containerName="extract-utilities"
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.778931 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="be69a743-2df9-4547-bd86-d5074ac34e04" containerName="extract-utilities"
Dec 02 13:58:33 crc kubenswrapper[4625]: E1202 13:58:33.778945 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be69a743-2df9-4547-bd86-d5074ac34e04" containerName="registry-server"
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.778951 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="be69a743-2df9-4547-bd86-d5074ac34e04" containerName="registry-server"
Dec 02 13:58:33 crc kubenswrapper[4625]: E1202 13:58:33.778966 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be69a743-2df9-4547-bd86-d5074ac34e04" containerName="extract-content"
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.778973 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="be69a743-2df9-4547-bd86-d5074ac34e04" containerName="extract-content"
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.779069 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="be69a743-2df9-4547-bd86-d5074ac34e04" containerName="registry-server"
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.779840 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv"
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.782211 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.808372 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv"]
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.853818 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltgpw\" (UniqueName: \"kubernetes.io/projected/40ec79b5-40cb-49e8-b693-c63f4066b8ed-kube-api-access-ltgpw\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv\" (UID: \"40ec79b5-40cb-49e8-b693-c63f4066b8ed\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv"
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.854337 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40ec79b5-40cb-49e8-b693-c63f4066b8ed-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv\" (UID: \"40ec79b5-40cb-49e8-b693-c63f4066b8ed\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv"
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.854466 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40ec79b5-40cb-49e8-b693-c63f4066b8ed-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv\" (UID: \"40ec79b5-40cb-49e8-b693-c63f4066b8ed\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv"
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.958782 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40ec79b5-40cb-49e8-b693-c63f4066b8ed-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv\" (UID: \"40ec79b5-40cb-49e8-b693-c63f4066b8ed\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv"
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.958934 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltgpw\" (UniqueName: \"kubernetes.io/projected/40ec79b5-40cb-49e8-b693-c63f4066b8ed-kube-api-access-ltgpw\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv\" (UID: \"40ec79b5-40cb-49e8-b693-c63f4066b8ed\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv"
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.958997 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40ec79b5-40cb-49e8-b693-c63f4066b8ed-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv\" (UID: \"40ec79b5-40cb-49e8-b693-c63f4066b8ed\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv"
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.960287 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40ec79b5-40cb-49e8-b693-c63f4066b8ed-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv\" (UID: \"40ec79b5-40cb-49e8-b693-c63f4066b8ed\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv"
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.961270 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40ec79b5-40cb-49e8-b693-c63f4066b8ed-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv\" (UID: \"40ec79b5-40cb-49e8-b693-c63f4066b8ed\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv"
Dec 02 13:58:33 crc kubenswrapper[4625]: I1202 13:58:33.991139 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltgpw\" (UniqueName: \"kubernetes.io/projected/40ec79b5-40cb-49e8-b693-c63f4066b8ed-kube-api-access-ltgpw\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv\" (UID: \"40ec79b5-40cb-49e8-b693-c63f4066b8ed\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv"
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.077106 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-pr728_15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5/console/0.log"
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.077262 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-pr728"
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.142427 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv"
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.161830 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-serving-cert\") pod \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") "
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.161896 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8pwk\" (UniqueName: \"kubernetes.io/projected/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-kube-api-access-p8pwk\") pod \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") "
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.161979 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-oauth-config\") pod \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") "
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.162006 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-oauth-serving-cert\") pod \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") "
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.162110 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-config\") pod \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") "
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.162172 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-trusted-ca-bundle\") pod \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") "
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.162200 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-service-ca\") pod \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\" (UID: \"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5\") "
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.163624 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-service-ca" (OuterVolumeSpecName: "service-ca") pod "15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" (UID: "15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.165207 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-config" (OuterVolumeSpecName: "console-config") pod "15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" (UID: "15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.165302 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" (UID: "15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.165300 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" (UID: "15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.170129 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-kube-api-access-p8pwk" (OuterVolumeSpecName: "kube-api-access-p8pwk") pod "15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" (UID: "15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5"). InnerVolumeSpecName "kube-api-access-p8pwk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.170173 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" (UID: "15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.171165 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" (UID: "15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.264896 4625 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.264928 4625 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.264938 4625 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-config\") on node \"crc\" DevicePath \"\""
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.264946 4625 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.264956 4625 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-service-ca\") on node \"crc\" DevicePath \"\""
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.264964 4625 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.264973 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8pwk\" (UniqueName: \"kubernetes.io/projected/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5-kube-api-access-p8pwk\") on node \"crc\" DevicePath \"\""
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.373496 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv"]
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.713288 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv" event={"ID":"40ec79b5-40cb-49e8-b693-c63f4066b8ed","Type":"ContainerStarted","Data":"554d67ed45dfee1ebcb2177adbf20248ba33c436df75b28bbfda873f2e19f55b"}
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.715963 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-pr728_15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5/console/0.log"
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.716023 4625 generic.go:334] "Generic (PLEG): container finished" podID="15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" containerID="f3117bbd66580a445924b4f4ff214aeedfccce391a0459d7d403aa20fbf62529" exitCode=2
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.716058 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pr728" event={"ID":"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5","Type":"ContainerDied","Data":"f3117bbd66580a445924b4f4ff214aeedfccce391a0459d7d403aa20fbf62529"}
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.716082 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pr728" event={"ID":"15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5","Type":"ContainerDied","Data":"271711ab94655d0bd69f32a28bc363fd39153e4f1f890c7beec59880ed923370"}
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.716107 4625 scope.go:117] "RemoveContainer" containerID="f3117bbd66580a445924b4f4ff214aeedfccce391a0459d7d403aa20fbf62529"
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.716264 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-pr728"
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.738288 4625 scope.go:117] "RemoveContainer" containerID="f3117bbd66580a445924b4f4ff214aeedfccce391a0459d7d403aa20fbf62529"
Dec 02 13:58:34 crc kubenswrapper[4625]: E1202 13:58:34.738959 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3117bbd66580a445924b4f4ff214aeedfccce391a0459d7d403aa20fbf62529\": container with ID starting with f3117bbd66580a445924b4f4ff214aeedfccce391a0459d7d403aa20fbf62529 not found: ID does not exist" containerID="f3117bbd66580a445924b4f4ff214aeedfccce391a0459d7d403aa20fbf62529"
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.739009 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3117bbd66580a445924b4f4ff214aeedfccce391a0459d7d403aa20fbf62529"} err="failed to get container status \"f3117bbd66580a445924b4f4ff214aeedfccce391a0459d7d403aa20fbf62529\": rpc error: code = NotFound desc = could not find container \"f3117bbd66580a445924b4f4ff214aeedfccce391a0459d7d403aa20fbf62529\": container with ID starting with f3117bbd66580a445924b4f4ff214aeedfccce391a0459d7d403aa20fbf62529 not found: ID does not exist"
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.759800 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-pr728"]
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.764098 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-pr728"]
Dec 02 13:58:34 crc kubenswrapper[4625]: I1202 13:58:34.867267 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" path="/var/lib/kubelet/pods/15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5/volumes"
Dec 02 13:58:35 crc kubenswrapper[4625]: I1202 13:58:35.730272 4625 generic.go:334] "Generic (PLEG): container finished" podID="40ec79b5-40cb-49e8-b693-c63f4066b8ed" containerID="722c64721caeeb465bfef3283f13b4da6903d89b6e3a1af3dc7fb7d905bfd5e0" exitCode=0
Dec 02 13:58:35 crc kubenswrapper[4625]: I1202 13:58:35.730382 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv" event={"ID":"40ec79b5-40cb-49e8-b693-c63f4066b8ed","Type":"ContainerDied","Data":"722c64721caeeb465bfef3283f13b4da6903d89b6e3a1af3dc7fb7d905bfd5e0"}
Dec 02 13:58:37 crc kubenswrapper[4625]: I1202 13:58:37.746485 4625 generic.go:334] "Generic (PLEG): container finished" podID="40ec79b5-40cb-49e8-b693-c63f4066b8ed" containerID="dbd5db9de4c7f28bb46e52b1e3fd84b95f202565789d8bb56586134b1585e018" exitCode=0
Dec 02 13:58:37 crc kubenswrapper[4625]: I1202 13:58:37.746805 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv" event={"ID":"40ec79b5-40cb-49e8-b693-c63f4066b8ed","Type":"ContainerDied","Data":"dbd5db9de4c7f28bb46e52b1e3fd84b95f202565789d8bb56586134b1585e018"}
Dec 02 13:58:38 crc kubenswrapper[4625]: I1202 13:58:38.758342 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv" event={"ID":"40ec79b5-40cb-49e8-b693-c63f4066b8ed","Type":"ContainerDied","Data":"6292b8014b38392631ce312604c2054b05a0491398d2818e61cfe54d19e8dfc2"}
Dec 02 13:58:38 crc kubenswrapper[4625]: I1202 13:58:38.758272 4625 generic.go:334] "Generic (PLEG): container finished" podID="40ec79b5-40cb-49e8-b693-c63f4066b8ed" containerID="6292b8014b38392631ce312604c2054b05a0491398d2818e61cfe54d19e8dfc2" exitCode=0
Dec 02 13:58:40 crc kubenswrapper[4625]: I1202 13:58:40.028950 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv"
Dec 02 13:58:40 crc kubenswrapper[4625]: I1202 13:58:40.056893 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltgpw\" (UniqueName: \"kubernetes.io/projected/40ec79b5-40cb-49e8-b693-c63f4066b8ed-kube-api-access-ltgpw\") pod \"40ec79b5-40cb-49e8-b693-c63f4066b8ed\" (UID: \"40ec79b5-40cb-49e8-b693-c63f4066b8ed\") "
Dec 02 13:58:40 crc kubenswrapper[4625]: I1202 13:58:40.057143 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40ec79b5-40cb-49e8-b693-c63f4066b8ed-util\") pod \"40ec79b5-40cb-49e8-b693-c63f4066b8ed\" (UID: \"40ec79b5-40cb-49e8-b693-c63f4066b8ed\") "
Dec 02 13:58:40 crc kubenswrapper[4625]: I1202 13:58:40.057217 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40ec79b5-40cb-49e8-b693-c63f4066b8ed-bundle\") pod \"40ec79b5-40cb-49e8-b693-c63f4066b8ed\" (UID: \"40ec79b5-40cb-49e8-b693-c63f4066b8ed\") "
Dec 02 13:58:40 crc kubenswrapper[4625]: I1202 13:58:40.059379 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40ec79b5-40cb-49e8-b693-c63f4066b8ed-bundle" (OuterVolumeSpecName: "bundle") pod "40ec79b5-40cb-49e8-b693-c63f4066b8ed" (UID: "40ec79b5-40cb-49e8-b693-c63f4066b8ed"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 13:58:40 crc kubenswrapper[4625]: I1202 13:58:40.071573 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ec79b5-40cb-49e8-b693-c63f4066b8ed-kube-api-access-ltgpw" (OuterVolumeSpecName: "kube-api-access-ltgpw") pod "40ec79b5-40cb-49e8-b693-c63f4066b8ed" (UID: "40ec79b5-40cb-49e8-b693-c63f4066b8ed"). InnerVolumeSpecName "kube-api-access-ltgpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 13:58:40 crc kubenswrapper[4625]: I1202 13:58:40.073644 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40ec79b5-40cb-49e8-b693-c63f4066b8ed-util" (OuterVolumeSpecName: "util") pod "40ec79b5-40cb-49e8-b693-c63f4066b8ed" (UID: "40ec79b5-40cb-49e8-b693-c63f4066b8ed"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 13:58:40 crc kubenswrapper[4625]: I1202 13:58:40.158715 4625 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40ec79b5-40cb-49e8-b693-c63f4066b8ed-util\") on node \"crc\" DevicePath \"\""
Dec 02 13:58:40 crc kubenswrapper[4625]: I1202 13:58:40.158777 4625 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40ec79b5-40cb-49e8-b693-c63f4066b8ed-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 13:58:40 crc kubenswrapper[4625]: I1202 13:58:40.158786 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltgpw\" (UniqueName: \"kubernetes.io/projected/40ec79b5-40cb-49e8-b693-c63f4066b8ed-kube-api-access-ltgpw\") on node \"crc\" DevicePath \"\""
Dec 02 13:58:40 crc kubenswrapper[4625]: I1202 13:58:40.792913 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv" event={"ID":"40ec79b5-40cb-49e8-b693-c63f4066b8ed","Type":"ContainerDied","Data":"554d67ed45dfee1ebcb2177adbf20248ba33c436df75b28bbfda873f2e19f55b"}
Dec 02 13:58:40 crc kubenswrapper[4625]: I1202 13:58:40.792953 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv"
Dec 02 13:58:40 crc kubenswrapper[4625]: I1202 13:58:40.792971 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="554d67ed45dfee1ebcb2177adbf20248ba33c436df75b28bbfda873f2e19f55b"
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.780812 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t"]
Dec 02 13:58:48 crc kubenswrapper[4625]: E1202 13:58:48.781959 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" containerName="console"
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.781979 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" containerName="console"
Dec 02 13:58:48 crc kubenswrapper[4625]: E1202 13:58:48.782007 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ec79b5-40cb-49e8-b693-c63f4066b8ed" containerName="pull"
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.782014 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ec79b5-40cb-49e8-b693-c63f4066b8ed" containerName="pull"
Dec 02 13:58:48 crc kubenswrapper[4625]: E1202 13:58:48.782031 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ec79b5-40cb-49e8-b693-c63f4066b8ed" containerName="util"
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.782041 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ec79b5-40cb-49e8-b693-c63f4066b8ed" containerName="util"
Dec 02 13:58:48 crc kubenswrapper[4625]: E1202 13:58:48.782055 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ec79b5-40cb-49e8-b693-c63f4066b8ed" containerName="extract"
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.782064 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ec79b5-40cb-49e8-b693-c63f4066b8ed" containerName="extract"
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.782178 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="15b0cdd0-25e3-4c9b-b7fc-4e19a99093b5" containerName="console"
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.782194 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ec79b5-40cb-49e8-b693-c63f4066b8ed" containerName="extract"
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.782729 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t"
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.786626 4625 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.788520 4625 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.794495 4625 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-sdszz"
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.796914 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.797947 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.813571 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t"]
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.901280 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3fd6af02-5a39-495f-8365-cd8ec3f3b051-apiservice-cert\") pod \"metallb-operator-controller-manager-5f95f47f79-qms5t\" (UID: \"3fd6af02-5a39-495f-8365-cd8ec3f3b051\") " pod="metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t"
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.901416 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zsht\" (UniqueName: \"kubernetes.io/projected/3fd6af02-5a39-495f-8365-cd8ec3f3b051-kube-api-access-8zsht\") pod \"metallb-operator-controller-manager-5f95f47f79-qms5t\" (UID: \"3fd6af02-5a39-495f-8365-cd8ec3f3b051\") " pod="metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t"
Dec 02 13:58:48 crc kubenswrapper[4625]: I1202 13:58:48.901449 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3fd6af02-5a39-495f-8365-cd8ec3f3b051-webhook-cert\") pod \"metallb-operator-controller-manager-5f95f47f79-qms5t\" (UID: \"3fd6af02-5a39-495f-8365-cd8ec3f3b051\") " pod="metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.002912 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3fd6af02-5a39-495f-8365-cd8ec3f3b051-apiservice-cert\") pod \"metallb-operator-controller-manager-5f95f47f79-qms5t\" (UID: \"3fd6af02-5a39-495f-8365-cd8ec3f3b051\") " pod="metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.003007 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zsht\" (UniqueName: \"kubernetes.io/projected/3fd6af02-5a39-495f-8365-cd8ec3f3b051-kube-api-access-8zsht\") pod \"metallb-operator-controller-manager-5f95f47f79-qms5t\" (UID: \"3fd6af02-5a39-495f-8365-cd8ec3f3b051\") " pod="metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.003030 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3fd6af02-5a39-495f-8365-cd8ec3f3b051-webhook-cert\") pod \"metallb-operator-controller-manager-5f95f47f79-qms5t\" (UID: \"3fd6af02-5a39-495f-8365-cd8ec3f3b051\") " pod="metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.017287 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3fd6af02-5a39-495f-8365-cd8ec3f3b051-webhook-cert\") pod \"metallb-operator-controller-manager-5f95f47f79-qms5t\" (UID: \"3fd6af02-5a39-495f-8365-cd8ec3f3b051\") " pod="metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.031993 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3fd6af02-5a39-495f-8365-cd8ec3f3b051-apiservice-cert\") pod \"metallb-operator-controller-manager-5f95f47f79-qms5t\" (UID: \"3fd6af02-5a39-495f-8365-cd8ec3f3b051\") " pod="metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.041361 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zsht\" (UniqueName: \"kubernetes.io/projected/3fd6af02-5a39-495f-8365-cd8ec3f3b051-kube-api-access-8zsht\") pod \"metallb-operator-controller-manager-5f95f47f79-qms5t\" (UID: \"3fd6af02-5a39-495f-8365-cd8ec3f3b051\") " pod="metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.102926 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.196437 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v"]
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.197354 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.200454 4625 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.200782 4625 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-skzl2"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.202537 4625 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.215675 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v"]
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.311260 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3980c773-00e8-4019-972e-e0f2f9724185-webhook-cert\") pod \"metallb-operator-webhook-server-867d4dc474-l4c4v\" (UID: \"3980c773-00e8-4019-972e-e0f2f9724185\") " pod="metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.311865 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3980c773-00e8-4019-972e-e0f2f9724185-apiservice-cert\") pod \"metallb-operator-webhook-server-867d4dc474-l4c4v\" (UID: \"3980c773-00e8-4019-972e-e0f2f9724185\") " pod="metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.311887 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzzlr\" (UniqueName: \"kubernetes.io/projected/3980c773-00e8-4019-972e-e0f2f9724185-kube-api-access-vzzlr\") pod \"metallb-operator-webhook-server-867d4dc474-l4c4v\" (UID: \"3980c773-00e8-4019-972e-e0f2f9724185\") " pod="metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.415252 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3980c773-00e8-4019-972e-e0f2f9724185-apiservice-cert\") pod \"metallb-operator-webhook-server-867d4dc474-l4c4v\" (UID: \"3980c773-00e8-4019-972e-e0f2f9724185\") " pod="metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.415340 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzzlr\" (UniqueName: \"kubernetes.io/projected/3980c773-00e8-4019-972e-e0f2f9724185-kube-api-access-vzzlr\") pod \"metallb-operator-webhook-server-867d4dc474-l4c4v\" (UID: \"3980c773-00e8-4019-972e-e0f2f9724185\") " pod="metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.415372 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3980c773-00e8-4019-972e-e0f2f9724185-webhook-cert\") pod \"metallb-operator-webhook-server-867d4dc474-l4c4v\" (UID: \"3980c773-00e8-4019-972e-e0f2f9724185\") " pod="metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.426614 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3980c773-00e8-4019-972e-e0f2f9724185-webhook-cert\") pod \"metallb-operator-webhook-server-867d4dc474-l4c4v\" (UID: \"3980c773-00e8-4019-972e-e0f2f9724185\") " pod="metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.438178 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzzlr\" (UniqueName: \"kubernetes.io/projected/3980c773-00e8-4019-972e-e0f2f9724185-kube-api-access-vzzlr\") pod \"metallb-operator-webhook-server-867d4dc474-l4c4v\" (UID: \"3980c773-00e8-4019-972e-e0f2f9724185\") " pod="metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.441058 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3980c773-00e8-4019-972e-e0f2f9724185-apiservice-cert\") pod \"metallb-operator-webhook-server-867d4dc474-l4c4v\" (UID: \"3980c773-00e8-4019-972e-e0f2f9724185\") " pod="metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.522889 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v"
Dec 02 13:58:49 crc kubenswrapper[4625]: I1202 13:58:49.887044 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t"]
Dec 02 13:58:50 crc kubenswrapper[4625]: I1202 13:58:50.019720 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v"]
Dec 02 13:58:50 crc kubenswrapper[4625]: W1202 13:58:50.060247 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3980c773_00e8_4019_972e_e0f2f9724185.slice/crio-07009aea364f0e042eaba1f8e0b22dcfbaca9ab02c9226aa1a7808b219d7e15d WatchSource:0}: Error finding container 07009aea364f0e042eaba1f8e0b22dcfbaca9ab02c9226aa1a7808b219d7e15d: Status 404 returned error can't find the container with id 07009aea364f0e042eaba1f8e0b22dcfbaca9ab02c9226aa1a7808b219d7e15d
Dec 02 13:58:50 crc kubenswrapper[4625]: I1202 13:58:50.872449 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t" event={"ID":"3fd6af02-5a39-495f-8365-cd8ec3f3b051","Type":"ContainerStarted","Data":"002dc536b98c04c35c9b6916e555d5a85acb27c745ed45e17519220dd8600116"}
Dec 02 13:58:50 crc kubenswrapper[4625]: I1202 13:58:50.874215 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v" event={"ID":"3980c773-00e8-4019-972e-e0f2f9724185","Type":"ContainerStarted","Data":"07009aea364f0e042eaba1f8e0b22dcfbaca9ab02c9226aa1a7808b219d7e15d"}
Dec 02 13:59:02 crc kubenswrapper[4625]: I1202 13:59:02.113818 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t" event={"ID":"3fd6af02-5a39-495f-8365-cd8ec3f3b051","Type":"ContainerStarted","Data":"20ccf2e9cbc21db0cf3a7a8c5076979bdab961b8fa8065f3affe857da7eabb22"}
Dec 02 13:59:02 crc kubenswrapper[4625]: I1202 13:59:02.114438 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t"
Dec 02 13:59:02 crc kubenswrapper[4625]: I1202 13:59:02.173275 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t" podStartSLOduration=2.394508501 podStartE2EDuration="14.173255345s" podCreationTimestamp="2025-12-02 13:58:48 +0000 UTC" firstStartedPulling="2025-12-02 13:58:49.897849129 +0000 UTC m=+885.860026204" lastFinishedPulling="2025-12-02 13:59:01.676595973 +0000 UTC m=+897.638773048" observedRunningTime="2025-12-02 13:59:02.167522131 +0000 UTC m=+898.129699196" watchObservedRunningTime="2025-12-02 13:59:02.173255345 +0000 UTC m=+898.135432420"
Dec 02 13:59:03 crc kubenswrapper[4625]: I1202 13:59:03.133405 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v" event={"ID":"3980c773-00e8-4019-972e-e0f2f9724185","Type":"ContainerStarted","Data":"e7378115aa5eb236e4cf5e60a2b49dc9858c5e798fa68f3111922335f6aae6dd"}
Dec 02 13:59:03 crc kubenswrapper[4625]: I1202 13:59:03.133470 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v"
Dec 02 13:59:03 crc kubenswrapper[4625]: I1202 13:59:03.166401 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v" podStartSLOduration=2.5559290409999997 podStartE2EDuration="14.166373495s" podCreationTimestamp="2025-12-02 13:58:49 +0000 UTC" firstStartedPulling="2025-12-02 13:58:50.066092077 +0000 UTC m=+886.028269152" lastFinishedPulling="2025-12-02 13:59:01.676536531 +0000 UTC m=+897.638713606" observedRunningTime="2025-12-02 13:59:03.161397062 +0000 UTC m=+899.123574157" watchObservedRunningTime="2025-12-02 13:59:03.166373495 +0000 UTC m=+899.128550570"
Dec 02 13:59:19 crc kubenswrapper[4625]: I1202 13:59:19.528420 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-867d4dc474-l4c4v"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.105426 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5f95f47f79-qms5t"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.874758 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-m2rh6"]
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.877535 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.879954 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.880415 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb"]
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.880476 4625 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.881275 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.881745 4625 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-92zcz"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.883403 4625 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.915409 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2bdff728-939b-414c-a0e9-35520fc54d71-frr-conf\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.915457 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2bdff728-939b-414c-a0e9-35520fc54d71-frr-startup\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.915522 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2bdff728-939b-414c-a0e9-35520fc54d71-metrics-certs\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.915551 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aad37202-ae48-4da9-b478-fad57dd764f2-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-72kpb\" (UID: \"aad37202-ae48-4da9-b478-fad57dd764f2\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.915668 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2bdff728-939b-414c-a0e9-35520fc54d71-frr-sockets\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.915746 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2bdff728-939b-414c-a0e9-35520fc54d71-metrics\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.915772 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzh4h\" (UniqueName: \"kubernetes.io/projected/2bdff728-939b-414c-a0e9-35520fc54d71-kube-api-access-lzh4h\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.915798 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpsqp\" (UniqueName: \"kubernetes.io/projected/aad37202-ae48-4da9-b478-fad57dd764f2-kube-api-access-hpsqp\") pod \"frr-k8s-webhook-server-7fcb986d4-72kpb\" (UID: \"aad37202-ae48-4da9-b478-fad57dd764f2\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.915826 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2bdff728-939b-414c-a0e9-35520fc54d71-reloader\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.917281 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb"]
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.983199 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-vrxh6"]
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.984174 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.986870 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.987700 4625 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.987767 4625 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-6bn5h"
Dec 02 13:59:39 crc kubenswrapper[4625]: I1202 13:59:39.991289 4625 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.017218 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-memberlist\") pod \"speaker-vrxh6\" (UID: \"76b2f8e3-7f6f-4592-a2e1-542b76f8872d\") " pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.017269 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-metallb-excludel2\") pod \"speaker-vrxh6\" (UID: \"76b2f8e3-7f6f-4592-a2e1-542b76f8872d\") " pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.017304 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2bdff728-939b-414c-a0e9-35520fc54d71-frr-conf\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.017337 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2bdff728-939b-414c-a0e9-35520fc54d71-frr-startup\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.017370 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2bdff728-939b-414c-a0e9-35520fc54d71-metrics-certs\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.017417 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aad37202-ae48-4da9-b478-fad57dd764f2-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-72kpb\" (UID: \"aad37202-ae48-4da9-b478-fad57dd764f2\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.017441 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2bdff728-939b-414c-a0e9-35520fc54d71-frr-sockets\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.017480 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-metrics-certs\") pod \"speaker-vrxh6\" (UID: \"76b2f8e3-7f6f-4592-a2e1-542b76f8872d\") " pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.017507 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpsqp\" (UniqueName: \"kubernetes.io/projected/aad37202-ae48-4da9-b478-fad57dd764f2-kube-api-access-hpsqp\") pod \"frr-k8s-webhook-server-7fcb986d4-72kpb\" (UID: \"aad37202-ae48-4da9-b478-fad57dd764f2\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.017525 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2bdff728-939b-414c-a0e9-35520fc54d71-metrics\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.017550 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzh4h\" (UniqueName: \"kubernetes.io/projected/2bdff728-939b-414c-a0e9-35520fc54d71-kube-api-access-lzh4h\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.017571 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2bdff728-939b-414c-a0e9-35520fc54d71-reloader\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.017603 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5kbr\" (UniqueName: \"kubernetes.io/projected/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-kube-api-access-r5kbr\") pod \"speaker-vrxh6\" (UID: \"76b2f8e3-7f6f-4592-a2e1-542b76f8872d\") " pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.018129 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2bdff728-939b-414c-a0e9-35520fc54d71-frr-conf\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.019051 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2bdff728-939b-414c-a0e9-35520fc54d71-frr-startup\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: E1202 13:59:40.020140 4625 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Dec 02 13:59:40 crc kubenswrapper[4625]: E1202 13:59:40.020197 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aad37202-ae48-4da9-b478-fad57dd764f2-cert podName:aad37202-ae48-4da9-b478-fad57dd764f2 nodeName:}" failed. No retries permitted until 2025-12-02 13:59:40.520179498 +0000 UTC m=+936.482356573 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aad37202-ae48-4da9-b478-fad57dd764f2-cert") pod "frr-k8s-webhook-server-7fcb986d4-72kpb" (UID: "aad37202-ae48-4da9-b478-fad57dd764f2") : secret "frr-k8s-webhook-server-cert" not found
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.020700 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2bdff728-939b-414c-a0e9-35520fc54d71-frr-sockets\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.021111 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2bdff728-939b-414c-a0e9-35520fc54d71-metrics\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.021435 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2bdff728-939b-414c-a0e9-35520fc54d71-reloader\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.027011 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-2jqbl"]
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.028367 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-2jqbl"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.032090 4625 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.043042 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2bdff728-939b-414c-a0e9-35520fc54d71-metrics-certs\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.048178 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-2jqbl"]
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.054488 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzh4h\" (UniqueName: \"kubernetes.io/projected/2bdff728-939b-414c-a0e9-35520fc54d71-kube-api-access-lzh4h\") pod \"frr-k8s-m2rh6\" (UID: \"2bdff728-939b-414c-a0e9-35520fc54d71\") " pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.056002 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpsqp\" (UniqueName: \"kubernetes.io/projected/aad37202-ae48-4da9-b478-fad57dd764f2-kube-api-access-hpsqp\") pod \"frr-k8s-webhook-server-7fcb986d4-72kpb\" (UID: \"aad37202-ae48-4da9-b478-fad57dd764f2\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.119450 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5bf7d269-353b-4ac4-a7e5-02c0cd01d62a-metrics-certs\") pod \"controller-f8648f98b-2jqbl\" (UID: \"5bf7d269-353b-4ac4-a7e5-02c0cd01d62a\") " pod="metallb-system/controller-f8648f98b-2jqbl"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.119505 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-metrics-certs\") pod \"speaker-vrxh6\" (UID: \"76b2f8e3-7f6f-4592-a2e1-542b76f8872d\") " pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.119549 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bddv2\" (UniqueName: \"kubernetes.io/projected/5bf7d269-353b-4ac4-a7e5-02c0cd01d62a-kube-api-access-bddv2\") pod \"controller-f8648f98b-2jqbl\" (UID: \"5bf7d269-353b-4ac4-a7e5-02c0cd01d62a\") " pod="metallb-system/controller-f8648f98b-2jqbl"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.119608 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5bf7d269-353b-4ac4-a7e5-02c0cd01d62a-cert\") pod \"controller-f8648f98b-2jqbl\" (UID: \"5bf7d269-353b-4ac4-a7e5-02c0cd01d62a\") " pod="metallb-system/controller-f8648f98b-2jqbl"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.119642 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5kbr\" (UniqueName: \"kubernetes.io/projected/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-kube-api-access-r5kbr\") pod \"speaker-vrxh6\" (UID: \"76b2f8e3-7f6f-4592-a2e1-542b76f8872d\") " pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.119672 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-memberlist\") pod \"speaker-vrxh6\" (UID: \"76b2f8e3-7f6f-4592-a2e1-542b76f8872d\") " pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.119701 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-metallb-excludel2\") pod \"speaker-vrxh6\" (UID: \"76b2f8e3-7f6f-4592-a2e1-542b76f8872d\") " pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.120828 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-metallb-excludel2\") pod \"speaker-vrxh6\" (UID: \"76b2f8e3-7f6f-4592-a2e1-542b76f8872d\") " pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: E1202 13:59:40.121093 4625 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Dec 02 13:59:40 crc kubenswrapper[4625]: E1202 13:59:40.121192 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-memberlist podName:76b2f8e3-7f6f-4592-a2e1-542b76f8872d nodeName:}" failed. No retries permitted until 2025-12-02 13:59:40.621166672 +0000 UTC m=+936.583343817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-memberlist") pod "speaker-vrxh6" (UID: "76b2f8e3-7f6f-4592-a2e1-542b76f8872d") : secret "metallb-memberlist" not found
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.138096 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-metrics-certs\") pod \"speaker-vrxh6\" (UID: \"76b2f8e3-7f6f-4592-a2e1-542b76f8872d\") " pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.146743 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5kbr\" (UniqueName: \"kubernetes.io/projected/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-kube-api-access-r5kbr\") pod \"speaker-vrxh6\" (UID: \"76b2f8e3-7f6f-4592-a2e1-542b76f8872d\") " pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.198888 4625 util.go:30] "No sandbox for pod can be found.
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.220817 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5bf7d269-353b-4ac4-a7e5-02c0cd01d62a-cert\") pod \"controller-f8648f98b-2jqbl\" (UID: \"5bf7d269-353b-4ac4-a7e5-02c0cd01d62a\") " pod="metallb-system/controller-f8648f98b-2jqbl"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.220951 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5bf7d269-353b-4ac4-a7e5-02c0cd01d62a-metrics-certs\") pod \"controller-f8648f98b-2jqbl\" (UID: \"5bf7d269-353b-4ac4-a7e5-02c0cd01d62a\") " pod="metallb-system/controller-f8648f98b-2jqbl"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.220995 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bddv2\" (UniqueName: \"kubernetes.io/projected/5bf7d269-353b-4ac4-a7e5-02c0cd01d62a-kube-api-access-bddv2\") pod \"controller-f8648f98b-2jqbl\" (UID: \"5bf7d269-353b-4ac4-a7e5-02c0cd01d62a\") " pod="metallb-system/controller-f8648f98b-2jqbl"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.224744 4625 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.227376 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5bf7d269-353b-4ac4-a7e5-02c0cd01d62a-metrics-certs\") pod \"controller-f8648f98b-2jqbl\" (UID: \"5bf7d269-353b-4ac4-a7e5-02c0cd01d62a\") " pod="metallb-system/controller-f8648f98b-2jqbl"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.239180 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5bf7d269-353b-4ac4-a7e5-02c0cd01d62a-cert\") pod \"controller-f8648f98b-2jqbl\" (UID: \"5bf7d269-353b-4ac4-a7e5-02c0cd01d62a\") " pod="metallb-system/controller-f8648f98b-2jqbl"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.243201 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bddv2\" (UniqueName: \"kubernetes.io/projected/5bf7d269-353b-4ac4-a7e5-02c0cd01d62a-kube-api-access-bddv2\") pod \"controller-f8648f98b-2jqbl\" (UID: \"5bf7d269-353b-4ac4-a7e5-02c0cd01d62a\") " pod="metallb-system/controller-f8648f98b-2jqbl"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.410971 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-2jqbl"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.434569 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m2rh6" event={"ID":"2bdff728-939b-414c-a0e9-35520fc54d71","Type":"ContainerStarted","Data":"d4acea3fc7c91de92e9c7f34125f7895fe3fc26f65d51457e022afcb3cfc2c0c"}
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.524402 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aad37202-ae48-4da9-b478-fad57dd764f2-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-72kpb\" (UID: \"aad37202-ae48-4da9-b478-fad57dd764f2\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.528425 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aad37202-ae48-4da9-b478-fad57dd764f2-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-72kpb\" (UID: \"aad37202-ae48-4da9-b478-fad57dd764f2\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.625939 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-memberlist\") pod \"speaker-vrxh6\" (UID: \"76b2f8e3-7f6f-4592-a2e1-542b76f8872d\") " pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:40 crc kubenswrapper[4625]: E1202 13:59:40.626089 4625 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Dec 02 13:59:40 crc kubenswrapper[4625]: E1202 13:59:40.626148 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-memberlist podName:76b2f8e3-7f6f-4592-a2e1-542b76f8872d nodeName:}" failed. No retries permitted until 2025-12-02 13:59:41.626126617 +0000 UTC m=+937.588303692 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-memberlist") pod "speaker-vrxh6" (UID: "76b2f8e3-7f6f-4592-a2e1-542b76f8872d") : secret "metallb-memberlist" not found
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.813818 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb"
Dec 02 13:59:40 crc kubenswrapper[4625]: I1202 13:59:40.853117 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-2jqbl"]
Dec 02 13:59:40 crc kubenswrapper[4625]: W1202 13:59:40.855542 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bf7d269_353b_4ac4_a7e5_02c0cd01d62a.slice/crio-e2ad78ccf5f656f03db7072ce777fdebb948d3bbf790d6757c7faeda239bbe0f WatchSource:0}: Error finding container e2ad78ccf5f656f03db7072ce777fdebb948d3bbf790d6757c7faeda239bbe0f: Status 404 returned error can't find the container with id e2ad78ccf5f656f03db7072ce777fdebb948d3bbf790d6757c7faeda239bbe0f
Dec 02 13:59:41 crc kubenswrapper[4625]: I1202 13:59:41.222065 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb"]
Dec 02 13:59:41 crc kubenswrapper[4625]: W1202 13:59:41.236804 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaad37202_ae48_4da9_b478_fad57dd764f2.slice/crio-b20123fd54b3f06d3119d280da0dcffa37d0745f2f1051d03a287f57162accf6 WatchSource:0}: Error finding container b20123fd54b3f06d3119d280da0dcffa37d0745f2f1051d03a287f57162accf6: Status 404 returned error can't find the container with id b20123fd54b3f06d3119d280da0dcffa37d0745f2f1051d03a287f57162accf6
Dec 02 13:59:41 crc kubenswrapper[4625]: I1202 13:59:41.440640 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb" event={"ID":"aad37202-ae48-4da9-b478-fad57dd764f2","Type":"ContainerStarted","Data":"b20123fd54b3f06d3119d280da0dcffa37d0745f2f1051d03a287f57162accf6"}
Dec 02 13:59:41 crc kubenswrapper[4625]: I1202 13:59:41.442969 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-2jqbl" event={"ID":"5bf7d269-353b-4ac4-a7e5-02c0cd01d62a","Type":"ContainerStarted","Data":"23f3ed793a4352fe3f5cff71f99b96bad586ce990874701b579bf250be7253bb"}
Dec 02 13:59:41 crc kubenswrapper[4625]: I1202 13:59:41.443083 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-2jqbl" event={"ID":"5bf7d269-353b-4ac4-a7e5-02c0cd01d62a","Type":"ContainerStarted","Data":"7cea09d09a8502e8c569df847e02b5a42a2c94596007bc5d7e002d37845f04a1"}
Dec 02 13:59:41 crc kubenswrapper[4625]: I1202 13:59:41.443141 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-2jqbl" event={"ID":"5bf7d269-353b-4ac4-a7e5-02c0cd01d62a","Type":"ContainerStarted","Data":"e2ad78ccf5f656f03db7072ce777fdebb948d3bbf790d6757c7faeda239bbe0f"}
Dec 02 13:59:41 crc kubenswrapper[4625]: I1202 13:59:41.444099 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-2jqbl"
Dec 02 13:59:41 crc kubenswrapper[4625]: I1202 13:59:41.462609 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-2jqbl" podStartSLOduration=2.462588259 podStartE2EDuration="2.462588259s" podCreationTimestamp="2025-12-02 13:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:59:41.457936491 +0000 UTC m=+937.420113566" watchObservedRunningTime="2025-12-02 13:59:41.462588259 +0000 UTC m=+937.424765334"
Dec 02 13:59:41 crc kubenswrapper[4625]: I1202 13:59:41.643686 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-memberlist\") pod \"speaker-vrxh6\" (UID: \"76b2f8e3-7f6f-4592-a2e1-542b76f8872d\") " pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:41 crc kubenswrapper[4625]: I1202 13:59:41.651411 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/76b2f8e3-7f6f-4592-a2e1-542b76f8872d-memberlist\") pod \"speaker-vrxh6\" (UID: \"76b2f8e3-7f6f-4592-a2e1-542b76f8872d\") " pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:41 crc kubenswrapper[4625]: I1202 13:59:41.800563 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:41 crc kubenswrapper[4625]: W1202 13:59:41.826661 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76b2f8e3_7f6f_4592_a2e1_542b76f8872d.slice/crio-e7efa54e6d223cf9dfe9ae3dd22fa925dbab239335a944e15dd5888c4234caa6 WatchSource:0}: Error finding container e7efa54e6d223cf9dfe9ae3dd22fa925dbab239335a944e15dd5888c4234caa6: Status 404 returned error can't find the container with id e7efa54e6d223cf9dfe9ae3dd22fa925dbab239335a944e15dd5888c4234caa6
Dec 02 13:59:42 crc kubenswrapper[4625]: I1202 13:59:42.460018 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vrxh6" event={"ID":"76b2f8e3-7f6f-4592-a2e1-542b76f8872d","Type":"ContainerStarted","Data":"e15a13549e36bb56f0963c0f2e75ee03fb6149a1cc72e333919da1d1ed123647"}
Dec 02 13:59:42 crc kubenswrapper[4625]: I1202 13:59:42.460325 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vrxh6" event={"ID":"76b2f8e3-7f6f-4592-a2e1-542b76f8872d","Type":"ContainerStarted","Data":"e7efa54e6d223cf9dfe9ae3dd22fa925dbab239335a944e15dd5888c4234caa6"}
Dec 02 13:59:43 crc kubenswrapper[4625]: I1202 13:59:43.476553 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vrxh6" event={"ID":"76b2f8e3-7f6f-4592-a2e1-542b76f8872d","Type":"ContainerStarted","Data":"2ebf3ecea12f35f668a444e49b8b714408b3611515de32c7154a9cd89f7d953d"}
Dec 02 13:59:43 crc kubenswrapper[4625]: I1202 13:59:43.476611 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-vrxh6"
Dec 02 13:59:44 crc kubenswrapper[4625]: I1202 13:59:44.885992 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-vrxh6" podStartSLOduration=5.885935707 podStartE2EDuration="5.885935707s" podCreationTimestamp="2025-12-02 13:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 13:59:43.510020342 +0000 UTC m=+939.472197417" watchObservedRunningTime="2025-12-02 13:59:44.885935707 +0000 UTC m=+940.848112782"
Dec 02 13:59:52 crc kubenswrapper[4625]: I1202 13:59:52.774405 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g5phf"]
Dec 02 13:59:52 crc kubenswrapper[4625]: I1202 13:59:52.776558 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 13:59:52 crc kubenswrapper[4625]: I1202 13:59:52.790680 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5phf"]
Dec 02 13:59:52 crc kubenswrapper[4625]: I1202 13:59:52.859788 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rmt4\" (UniqueName: \"kubernetes.io/projected/c7f86937-9b63-4748-9d3c-117cc276c910-kube-api-access-5rmt4\") pod \"redhat-marketplace-g5phf\" (UID: \"c7f86937-9b63-4748-9d3c-117cc276c910\") " pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 13:59:52 crc kubenswrapper[4625]: I1202 13:59:52.860390 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f86937-9b63-4748-9d3c-117cc276c910-catalog-content\") pod \"redhat-marketplace-g5phf\" (UID: \"c7f86937-9b63-4748-9d3c-117cc276c910\") " pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 13:59:52 crc kubenswrapper[4625]: I1202 13:59:52.860571 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f86937-9b63-4748-9d3c-117cc276c910-utilities\") pod \"redhat-marketplace-g5phf\" (UID: \"c7f86937-9b63-4748-9d3c-117cc276c910\") " pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 13:59:52 crc kubenswrapper[4625]: I1202 13:59:52.962509 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f86937-9b63-4748-9d3c-117cc276c910-utilities\") pod \"redhat-marketplace-g5phf\" (UID: \"c7f86937-9b63-4748-9d3c-117cc276c910\") " pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 13:59:52 crc kubenswrapper[4625]: I1202 13:59:52.962609 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rmt4\" (UniqueName: \"kubernetes.io/projected/c7f86937-9b63-4748-9d3c-117cc276c910-kube-api-access-5rmt4\") pod \"redhat-marketplace-g5phf\" (UID: \"c7f86937-9b63-4748-9d3c-117cc276c910\") " pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 13:59:52 crc kubenswrapper[4625]: I1202 13:59:52.962714 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f86937-9b63-4748-9d3c-117cc276c910-catalog-content\") pod \"redhat-marketplace-g5phf\" (UID: \"c7f86937-9b63-4748-9d3c-117cc276c910\") " pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 13:59:52 crc kubenswrapper[4625]: I1202 13:59:52.963225 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f86937-9b63-4748-9d3c-117cc276c910-catalog-content\") pod \"redhat-marketplace-g5phf\" (UID: \"c7f86937-9b63-4748-9d3c-117cc276c910\") " pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 13:59:52 crc kubenswrapper[4625]: I1202 13:59:52.964547 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f86937-9b63-4748-9d3c-117cc276c910-utilities\") pod \"redhat-marketplace-g5phf\" (UID: \"c7f86937-9b63-4748-9d3c-117cc276c910\") " pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 13:59:52 crc kubenswrapper[4625]: I1202 13:59:52.998660 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rmt4\" (UniqueName: \"kubernetes.io/projected/c7f86937-9b63-4748-9d3c-117cc276c910-kube-api-access-5rmt4\") pod \"redhat-marketplace-g5phf\" (UID: \"c7f86937-9b63-4748-9d3c-117cc276c910\") " pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 13:59:53 crc kubenswrapper[4625]: I1202 13:59:53.098034 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 13:59:53 crc kubenswrapper[4625]: I1202 13:59:53.574221 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5phf"]
Dec 02 13:59:53 crc kubenswrapper[4625]: W1202 13:59:53.578517 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7f86937_9b63_4748_9d3c_117cc276c910.slice/crio-0109d20df49d269d02f819a07b9ba65df1e7e677c153e65504d412e27bd055a7 WatchSource:0}: Error finding container 0109d20df49d269d02f819a07b9ba65df1e7e677c153e65504d412e27bd055a7: Status 404 returned error can't find the container with id 0109d20df49d269d02f819a07b9ba65df1e7e677c153e65504d412e27bd055a7
Dec 02 13:59:53 crc kubenswrapper[4625]: I1202 13:59:53.592900 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5phf" event={"ID":"c7f86937-9b63-4748-9d3c-117cc276c910","Type":"ContainerStarted","Data":"0109d20df49d269d02f819a07b9ba65df1e7e677c153e65504d412e27bd055a7"}
Dec 02 13:59:53 crc kubenswrapper[4625]: I1202 13:59:53.595584 4625 generic.go:334] "Generic (PLEG): container finished" podID="2bdff728-939b-414c-a0e9-35520fc54d71" containerID="04d7e314e96b4190798529ce7a431b0970e41d2ecb761318f1b6f8b79a35c408" exitCode=0
Dec 02 13:59:53 crc kubenswrapper[4625]: I1202 13:59:53.595680 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m2rh6" event={"ID":"2bdff728-939b-414c-a0e9-35520fc54d71","Type":"ContainerDied","Data":"04d7e314e96b4190798529ce7a431b0970e41d2ecb761318f1b6f8b79a35c408"}
Dec 02 13:59:53 crc kubenswrapper[4625]: I1202 13:59:53.604751 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb" event={"ID":"aad37202-ae48-4da9-b478-fad57dd764f2","Type":"ContainerStarted","Data":"7e0f075bc7f3cd6d0d8233fced51840121c341788db67f2d5c2598686300046b"}
Dec 02 13:59:53 crc kubenswrapper[4625]: I1202 13:59:53.605624 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb"
Dec 02 13:59:53 crc kubenswrapper[4625]: I1202 13:59:53.991178 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb" podStartSLOduration=3.390170024 podStartE2EDuration="14.991148389s" podCreationTimestamp="2025-12-02 13:59:39 +0000 UTC" firstStartedPulling="2025-12-02 13:59:41.240116726 +0000 UTC m=+937.202293801" lastFinishedPulling="2025-12-02 13:59:52.841095101 +0000 UTC m=+948.803272166" observedRunningTime="2025-12-02 13:59:53.988939089 +0000 UTC m=+949.951116164" watchObservedRunningTime="2025-12-02 13:59:53.991148389 +0000 UTC m=+949.953325464"
Dec 02 13:59:54 crc kubenswrapper[4625]: I1202 13:59:54.616573 4625 generic.go:334] "Generic (PLEG): container finished" podID="c7f86937-9b63-4748-9d3c-117cc276c910" containerID="f76b324e36da927e363553a2aec84c10bc82e03e8225381b153a3ef2f3131a7c" exitCode=0
Dec 02 13:59:54 crc kubenswrapper[4625]: I1202 13:59:54.616687 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5phf" event={"ID":"c7f86937-9b63-4748-9d3c-117cc276c910","Type":"ContainerDied","Data":"f76b324e36da927e363553a2aec84c10bc82e03e8225381b153a3ef2f3131a7c"}
Dec 02 13:59:54 crc kubenswrapper[4625]: I1202 13:59:54.619899 4625 generic.go:334] "Generic (PLEG): container finished" podID="2bdff728-939b-414c-a0e9-35520fc54d71" containerID="be961f4a5e499e181fb8596a9d2c33c279c0845124584c5f9f84a745dcf08df4" exitCode=0
Dec 02 13:59:54 crc kubenswrapper[4625]: I1202 13:59:54.620080 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m2rh6" event={"ID":"2bdff728-939b-414c-a0e9-35520fc54d71","Type":"ContainerDied","Data":"be961f4a5e499e181fb8596a9d2c33c279c0845124584c5f9f84a745dcf08df4"}
Dec 02 13:59:55 crc kubenswrapper[4625]: I1202 13:59:55.633301 4625 generic.go:334] "Generic (PLEG): container finished" podID="2bdff728-939b-414c-a0e9-35520fc54d71" containerID="dcf8c0002dc38c4adfa63a6c2e50ed76a2f95d6689ecfff65cf0b2825012440e" exitCode=0
Dec 02 13:59:55 crc kubenswrapper[4625]: I1202 13:59:55.633455 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m2rh6" event={"ID":"2bdff728-939b-414c-a0e9-35520fc54d71","Type":"ContainerDied","Data":"dcf8c0002dc38c4adfa63a6c2e50ed76a2f95d6689ecfff65cf0b2825012440e"}
Dec 02 13:59:56 crc kubenswrapper[4625]: I1202 13:59:56.643931 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m2rh6" event={"ID":"2bdff728-939b-414c-a0e9-35520fc54d71","Type":"ContainerStarted","Data":"6e789b2f003d7e2680af5d74119b83199c5f1f37653fff2205e779dfa49cee7b"}
Dec 02 13:59:56 crc kubenswrapper[4625]: I1202 13:59:56.644286 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m2rh6" event={"ID":"2bdff728-939b-414c-a0e9-35520fc54d71","Type":"ContainerStarted","Data":"e3bb8229698349bbba375318d828e74d89b94da87e811b8b951eb2177a6f6a91"}
Dec 02 13:59:56 crc kubenswrapper[4625]: I1202 13:59:56.647221 4625 generic.go:334] "Generic (PLEG): container finished" podID="c7f86937-9b63-4748-9d3c-117cc276c910" containerID="ee1609e553f2736ad1346ea7fdc36e300edfded03ddacef573aa1baef3e352bc" exitCode=0
Dec 02 13:59:56 crc kubenswrapper[4625]: I1202 13:59:56.647256 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5phf" event={"ID":"c7f86937-9b63-4748-9d3c-117cc276c910","Type":"ContainerDied","Data":"ee1609e553f2736ad1346ea7fdc36e300edfded03ddacef573aa1baef3e352bc"}
Dec 02 13:59:57 crc kubenswrapper[4625]: I1202 13:59:57.669421 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5phf" event={"ID":"c7f86937-9b63-4748-9d3c-117cc276c910","Type":"ContainerStarted","Data":"6b0d3aae937deeeab682f952eaeae7ba76c958ff05dc0d004e511e0f3bef2fac"}
Dec 02 13:59:57 crc kubenswrapper[4625]: I1202 13:59:57.681194 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m2rh6" event={"ID":"2bdff728-939b-414c-a0e9-35520fc54d71","Type":"ContainerStarted","Data":"f51ca9228001cad0ebf112a59a605d728b10c1970fb1bf2dd61ab522fee55b24"}
Dec 02 13:59:57 crc kubenswrapper[4625]: I1202 13:59:57.681294 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m2rh6" event={"ID":"2bdff728-939b-414c-a0e9-35520fc54d71","Type":"ContainerStarted","Data":"feabf0413507544a6ed220470473104757557ff2bb5d0bca2cfee89201fd558e"}
Dec 02 13:59:57 crc kubenswrapper[4625]: I1202 13:59:57.681307 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m2rh6" event={"ID":"2bdff728-939b-414c-a0e9-35520fc54d71","Type":"ContainerStarted","Data":"f3699e33f4984896b538ac3d2284d3cf53b1defdca4b13d77e7d1500216d2c7b"}
Dec 02 13:59:57 crc kubenswrapper[4625]: I1202 13:59:57.746606 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g5phf" podStartSLOduration=3.139471189 podStartE2EDuration="5.746583122s" podCreationTimestamp="2025-12-02 13:59:52 +0000 UTC" firstStartedPulling="2025-12-02 13:59:54.619051991 +0000 UTC m=+950.581229066" lastFinishedPulling="2025-12-02 13:59:57.226163924 +0000 UTC m=+953.188340999" observedRunningTime="2025-12-02 13:59:57.740975807 +0000 UTC m=+953.703152882" watchObservedRunningTime="2025-12-02 13:59:57.746583122 +0000 UTC m=+953.708760217"
Dec 02 13:59:58 crc kubenswrapper[4625]: I1202 13:59:58.695005 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m2rh6" event={"ID":"2bdff728-939b-414c-a0e9-35520fc54d71","Type":"ContainerStarted","Data":"be0bb5bd2690cc514a50fad52dbbe4282d34867d1e479ee73bd94c63ac719e27"}
Dec 02 13:59:58 crc kubenswrapper[4625]: I1202 13:59:58.695243 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-m2rh6"
Dec 02 13:59:58 crc kubenswrapper[4625]: I1202 13:59:58.724140 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-m2rh6" podStartSLOduration=7.347801824 podStartE2EDuration="19.724115271s" podCreationTimestamp="2025-12-02 13:59:39 +0000 UTC" firstStartedPulling="2025-12-02 13:59:40.420335402 +0000 UTC m=+936.382512477" lastFinishedPulling="2025-12-02 13:59:52.796648849 +0000 UTC m=+948.758825924" observedRunningTime="2025-12-02 13:59:58.720526152 +0000 UTC m=+954.682703247" watchObservedRunningTime="2025-12-02 13:59:58.724115271 +0000 UTC m=+954.686292346"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.144497 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"]
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.145574 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.149526 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.150427 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.160759 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"]
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.162337 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c17bcfde-3d1b-407e-83f8-b9a9640c7108-secret-volume\") pod \"collect-profiles-29411400-9j88t\" (UID: \"c17bcfde-3d1b-407e-83f8-b9a9640c7108\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.162424 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbkh4\" (UniqueName: \"kubernetes.io/projected/c17bcfde-3d1b-407e-83f8-b9a9640c7108-kube-api-access-pbkh4\") pod \"collect-profiles-29411400-9j88t\" (UID: \"c17bcfde-3d1b-407e-83f8-b9a9640c7108\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.162470 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c17bcfde-3d1b-407e-83f8-b9a9640c7108-config-volume\") pod \"collect-profiles-29411400-9j88t\" (UID: \"c17bcfde-3d1b-407e-83f8-b9a9640c7108\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.200079 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-m2rh6"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.250846 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-m2rh6"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.263925 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c17bcfde-3d1b-407e-83f8-b9a9640c7108-config-volume\") pod \"collect-profiles-29411400-9j88t\" (UID: \"c17bcfde-3d1b-407e-83f8-b9a9640c7108\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.264024 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c17bcfde-3d1b-407e-83f8-b9a9640c7108-secret-volume\") pod \"collect-profiles-29411400-9j88t\" (UID: \"c17bcfde-3d1b-407e-83f8-b9a9640c7108\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.264067 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbkh4\" (UniqueName: \"kubernetes.io/projected/c17bcfde-3d1b-407e-83f8-b9a9640c7108-kube-api-access-pbkh4\") pod \"collect-profiles-29411400-9j88t\" (UID: \"c17bcfde-3d1b-407e-83f8-b9a9640c7108\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.270806 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c17bcfde-3d1b-407e-83f8-b9a9640c7108-config-volume\") pod \"collect-profiles-29411400-9j88t\" (UID: \"c17bcfde-3d1b-407e-83f8-b9a9640c7108\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.283601 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c17bcfde-3d1b-407e-83f8-b9a9640c7108-secret-volume\") pod \"collect-profiles-29411400-9j88t\" (UID: \"c17bcfde-3d1b-407e-83f8-b9a9640c7108\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.286367 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbkh4\" (UniqueName: \"kubernetes.io/projected/c17bcfde-3d1b-407e-83f8-b9a9640c7108-kube-api-access-pbkh4\") pod \"collect-profiles-29411400-9j88t\" (UID: \"c17bcfde-3d1b-407e-83f8-b9a9640c7108\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.417587 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-2jqbl"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.466355 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"
Dec 02 14:00:00 crc kubenswrapper[4625]: I1202 14:00:00.872814 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"]
Dec 02 14:00:01 crc kubenswrapper[4625]: I1202 14:00:01.713977 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t" event={"ID":"c17bcfde-3d1b-407e-83f8-b9a9640c7108","Type":"ContainerStarted","Data":"0a043ad81bd0450cf2f39c00cb75de4dda57ab38f3fb62ffaa4951a2760e67e7"}
Dec 02 14:00:01 crc kubenswrapper[4625]: I1202 14:00:01.714029 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t" event={"ID":"c17bcfde-3d1b-407e-83f8-b9a9640c7108","Type":"ContainerStarted","Data":"d211beb4199f6cb8e99f16498f2f19355d15e81348253eb47cd148ae27358e79"}
Dec 02 14:00:01 crc kubenswrapper[4625]: I1202 14:00:01.745418 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t" podStartSLOduration=1.745387711 podStartE2EDuration="1.745387711s" podCreationTimestamp="2025-12-02 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:00:01.738524393 +0000 UTC m=+957.700701468" watchObservedRunningTime="2025-12-02 14:00:01.745387711 +0000 UTC m=+957.707564856"
Dec 02 14:00:01 crc kubenswrapper[4625]: I1202 14:00:01.809498 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-vrxh6"
Dec 02 14:00:02 crc kubenswrapper[4625]: I1202 14:00:02.722611 4625 generic.go:334] "Generic (PLEG): container finished" podID="c17bcfde-3d1b-407e-83f8-b9a9640c7108" containerID="0a043ad81bd0450cf2f39c00cb75de4dda57ab38f3fb62ffaa4951a2760e67e7" exitCode=0
Dec 02 14:00:02 crc kubenswrapper[4625]: I1202 14:00:02.722710 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t" event={"ID":"c17bcfde-3d1b-407e-83f8-b9a9640c7108","Type":"ContainerDied","Data":"0a043ad81bd0450cf2f39c00cb75de4dda57ab38f3fb62ffaa4951a2760e67e7"}
Dec 02 14:00:03 crc kubenswrapper[4625]: I1202 14:00:03.099245 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 14:00:03 crc kubenswrapper[4625]: I1202 14:00:03.099295 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 14:00:03 crc kubenswrapper[4625]: I1202 14:00:03.142002 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 14:00:03 crc kubenswrapper[4625]: I1202 14:00:03.841559 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 14:00:03 crc kubenswrapper[4625]: I1202 14:00:03.889556 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5phf"]
Dec 02 14:00:04 crc kubenswrapper[4625]: I1202 14:00:04.109199 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"
Dec 02 14:00:04 crc kubenswrapper[4625]: I1202 14:00:04.247902 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbkh4\" (UniqueName: \"kubernetes.io/projected/c17bcfde-3d1b-407e-83f8-b9a9640c7108-kube-api-access-pbkh4\") pod \"c17bcfde-3d1b-407e-83f8-b9a9640c7108\" (UID: \"c17bcfde-3d1b-407e-83f8-b9a9640c7108\") "
Dec 02 14:00:04 crc kubenswrapper[4625]: I1202 14:00:04.248002 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c17bcfde-3d1b-407e-83f8-b9a9640c7108-config-volume\") pod \"c17bcfde-3d1b-407e-83f8-b9a9640c7108\" (UID: \"c17bcfde-3d1b-407e-83f8-b9a9640c7108\") "
Dec 02 14:00:04 crc kubenswrapper[4625]: I1202 14:00:04.248087 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c17bcfde-3d1b-407e-83f8-b9a9640c7108-secret-volume\") pod \"c17bcfde-3d1b-407e-83f8-b9a9640c7108\" (UID: \"c17bcfde-3d1b-407e-83f8-b9a9640c7108\") "
Dec 02 14:00:04 crc kubenswrapper[4625]: I1202 14:00:04.249035 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c17bcfde-3d1b-407e-83f8-b9a9640c7108-config-volume" (OuterVolumeSpecName: "config-volume") pod "c17bcfde-3d1b-407e-83f8-b9a9640c7108" (UID: "c17bcfde-3d1b-407e-83f8-b9a9640c7108"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 14:00:04 crc kubenswrapper[4625]: I1202 14:00:04.249473 4625 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c17bcfde-3d1b-407e-83f8-b9a9640c7108-config-volume\") on node \"crc\" DevicePath \"\""
Dec 02 14:00:04 crc kubenswrapper[4625]: I1202 14:00:04.253219 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c17bcfde-3d1b-407e-83f8-b9a9640c7108-kube-api-access-pbkh4" (OuterVolumeSpecName: "kube-api-access-pbkh4") pod "c17bcfde-3d1b-407e-83f8-b9a9640c7108" (UID: "c17bcfde-3d1b-407e-83f8-b9a9640c7108"). InnerVolumeSpecName "kube-api-access-pbkh4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:00:04 crc kubenswrapper[4625]: I1202 14:00:04.253867 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c17bcfde-3d1b-407e-83f8-b9a9640c7108-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c17bcfde-3d1b-407e-83f8-b9a9640c7108" (UID: "c17bcfde-3d1b-407e-83f8-b9a9640c7108"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:00:04 crc kubenswrapper[4625]: I1202 14:00:04.350345 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbkh4\" (UniqueName: \"kubernetes.io/projected/c17bcfde-3d1b-407e-83f8-b9a9640c7108-kube-api-access-pbkh4\") on node \"crc\" DevicePath \"\""
Dec 02 14:00:04 crc kubenswrapper[4625]: I1202 14:00:04.350387 4625 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c17bcfde-3d1b-407e-83f8-b9a9640c7108-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 02 14:00:04 crc kubenswrapper[4625]: I1202 14:00:04.741630 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t" event={"ID":"c17bcfde-3d1b-407e-83f8-b9a9640c7108","Type":"ContainerDied","Data":"d211beb4199f6cb8e99f16498f2f19355d15e81348253eb47cd148ae27358e79"}
Dec 02 14:00:04 crc kubenswrapper[4625]: I1202 14:00:04.741689 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"
Dec 02 14:00:04 crc kubenswrapper[4625]: I1202 14:00:04.741696 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d211beb4199f6cb8e99f16498f2f19355d15e81348253eb47cd148ae27358e79"
Dec 02 14:00:05 crc kubenswrapper[4625]: I1202 14:00:05.746119 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g5phf" podUID="c7f86937-9b63-4748-9d3c-117cc276c910" containerName="registry-server" containerID="cri-o://6b0d3aae937deeeab682f952eaeae7ba76c958ff05dc0d004e511e0f3bef2fac" gracePeriod=2
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.153502 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.174991 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rmt4\" (UniqueName: \"kubernetes.io/projected/c7f86937-9b63-4748-9d3c-117cc276c910-kube-api-access-5rmt4\") pod \"c7f86937-9b63-4748-9d3c-117cc276c910\" (UID: \"c7f86937-9b63-4748-9d3c-117cc276c910\") "
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.175070 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f86937-9b63-4748-9d3c-117cc276c910-utilities\") pod \"c7f86937-9b63-4748-9d3c-117cc276c910\" (UID: \"c7f86937-9b63-4748-9d3c-117cc276c910\") "
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.175182 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f86937-9b63-4748-9d3c-117cc276c910-catalog-content\") pod \"c7f86937-9b63-4748-9d3c-117cc276c910\" (UID: \"c7f86937-9b63-4748-9d3c-117cc276c910\") "
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.176128 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7f86937-9b63-4748-9d3c-117cc276c910-utilities" (OuterVolumeSpecName: "utilities") pod "c7f86937-9b63-4748-9d3c-117cc276c910" (UID: "c7f86937-9b63-4748-9d3c-117cc276c910"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.179890 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f86937-9b63-4748-9d3c-117cc276c910-kube-api-access-5rmt4" (OuterVolumeSpecName: "kube-api-access-5rmt4") pod "c7f86937-9b63-4748-9d3c-117cc276c910" (UID: "c7f86937-9b63-4748-9d3c-117cc276c910"). InnerVolumeSpecName "kube-api-access-5rmt4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.196661 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7f86937-9b63-4748-9d3c-117cc276c910-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7f86937-9b63-4748-9d3c-117cc276c910" (UID: "c7f86937-9b63-4748-9d3c-117cc276c910"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.276303 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7f86937-9b63-4748-9d3c-117cc276c910-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.276676 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rmt4\" (UniqueName: \"kubernetes.io/projected/c7f86937-9b63-4748-9d3c-117cc276c910-kube-api-access-5rmt4\") on node \"crc\" DevicePath \"\""
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.276825 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7f86937-9b63-4748-9d3c-117cc276c910-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.753827 4625 generic.go:334] "Generic (PLEG): container finished" podID="c7f86937-9b63-4748-9d3c-117cc276c910" containerID="6b0d3aae937deeeab682f952eaeae7ba76c958ff05dc0d004e511e0f3bef2fac" exitCode=0
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.753889 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5phf" event={"ID":"c7f86937-9b63-4748-9d3c-117cc276c910","Type":"ContainerDied","Data":"6b0d3aae937deeeab682f952eaeae7ba76c958ff05dc0d004e511e0f3bef2fac"}
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.753961 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5phf" event={"ID":"c7f86937-9b63-4748-9d3c-117cc276c910","Type":"ContainerDied","Data":"0109d20df49d269d02f819a07b9ba65df1e7e677c153e65504d412e27bd055a7"}
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.754000 4625 scope.go:117] "RemoveContainer" containerID="6b0d3aae937deeeab682f952eaeae7ba76c958ff05dc0d004e511e0f3bef2fac"
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.753910 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5phf"
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.771909 4625 scope.go:117] "RemoveContainer" containerID="ee1609e553f2736ad1346ea7fdc36e300edfded03ddacef573aa1baef3e352bc"
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.783378 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5phf"]
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.793625 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5phf"]
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.796572 4625 scope.go:117] "RemoveContainer" containerID="f76b324e36da927e363553a2aec84c10bc82e03e8225381b153a3ef2f3131a7c"
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.810352 4625 scope.go:117] "RemoveContainer" containerID="6b0d3aae937deeeab682f952eaeae7ba76c958ff05dc0d004e511e0f3bef2fac"
Dec 02 14:00:06 crc kubenswrapper[4625]: E1202 14:00:06.810865 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b0d3aae937deeeab682f952eaeae7ba76c958ff05dc0d004e511e0f3bef2fac\": container with ID starting with 6b0d3aae937deeeab682f952eaeae7ba76c958ff05dc0d004e511e0f3bef2fac not found: ID does not exist" containerID="6b0d3aae937deeeab682f952eaeae7ba76c958ff05dc0d004e511e0f3bef2fac"
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.810930 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b0d3aae937deeeab682f952eaeae7ba76c958ff05dc0d004e511e0f3bef2fac"} err="failed to get container status \"6b0d3aae937deeeab682f952eaeae7ba76c958ff05dc0d004e511e0f3bef2fac\": rpc error: code = NotFound desc = could not find container \"6b0d3aae937deeeab682f952eaeae7ba76c958ff05dc0d004e511e0f3bef2fac\": container with ID starting with 6b0d3aae937deeeab682f952eaeae7ba76c958ff05dc0d004e511e0f3bef2fac not found: ID does not exist"
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.810959 4625 scope.go:117] "RemoveContainer" containerID="ee1609e553f2736ad1346ea7fdc36e300edfded03ddacef573aa1baef3e352bc"
Dec 02 14:00:06 crc kubenswrapper[4625]: E1202 14:00:06.811443 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee1609e553f2736ad1346ea7fdc36e300edfded03ddacef573aa1baef3e352bc\": container with ID starting with ee1609e553f2736ad1346ea7fdc36e300edfded03ddacef573aa1baef3e352bc not found: ID does not exist" containerID="ee1609e553f2736ad1346ea7fdc36e300edfded03ddacef573aa1baef3e352bc"
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.811470 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee1609e553f2736ad1346ea7fdc36e300edfded03ddacef573aa1baef3e352bc"} err="failed to get container status \"ee1609e553f2736ad1346ea7fdc36e300edfded03ddacef573aa1baef3e352bc\": rpc error: code = NotFound desc = could not find container \"ee1609e553f2736ad1346ea7fdc36e300edfded03ddacef573aa1baef3e352bc\": container with ID starting with ee1609e553f2736ad1346ea7fdc36e300edfded03ddacef573aa1baef3e352bc not found: ID does not exist"
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.811493 4625 scope.go:117] "RemoveContainer" containerID="f76b324e36da927e363553a2aec84c10bc82e03e8225381b153a3ef2f3131a7c"
Dec 02 14:00:06 crc kubenswrapper[4625]: E1202 14:00:06.811778 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76b324e36da927e363553a2aec84c10bc82e03e8225381b153a3ef2f3131a7c\": container with ID starting with f76b324e36da927e363553a2aec84c10bc82e03e8225381b153a3ef2f3131a7c not found: ID does not exist" containerID="f76b324e36da927e363553a2aec84c10bc82e03e8225381b153a3ef2f3131a7c"
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.811807 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76b324e36da927e363553a2aec84c10bc82e03e8225381b153a3ef2f3131a7c"} err="failed to get container status \"f76b324e36da927e363553a2aec84c10bc82e03e8225381b153a3ef2f3131a7c\": rpc error: code = NotFound desc = could not find container \"f76b324e36da927e363553a2aec84c10bc82e03e8225381b153a3ef2f3131a7c\": container with ID starting with f76b324e36da927e363553a2aec84c10bc82e03e8225381b153a3ef2f3131a7c not found: ID does not exist"
Dec 02 14:00:06 crc kubenswrapper[4625]: I1202 14:00:06.865765 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7f86937-9b63-4748-9d3c-117cc276c910" path="/var/lib/kubelet/pods/c7f86937-9b63-4748-9d3c-117cc276c910/volumes"
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.387676 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gdt46"]
Dec 02 14:00:07 crc kubenswrapper[4625]: E1202 14:00:07.387972 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f86937-9b63-4748-9d3c-117cc276c910" containerName="extract-utilities"
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.387986 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f86937-9b63-4748-9d3c-117cc276c910" containerName="extract-utilities"
Dec 02 14:00:07 crc kubenswrapper[4625]: E1202 14:00:07.388006 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f86937-9b63-4748-9d3c-117cc276c910" containerName="extract-content"
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.388015 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f86937-9b63-4748-9d3c-117cc276c910" containerName="extract-content"
Dec 02 14:00:07 crc kubenswrapper[4625]: E1202 14:00:07.388027 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f86937-9b63-4748-9d3c-117cc276c910" containerName="registry-server"
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.388035 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f86937-9b63-4748-9d3c-117cc276c910" containerName="registry-server"
Dec 02 14:00:07 crc kubenswrapper[4625]: E1202 14:00:07.388046 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c17bcfde-3d1b-407e-83f8-b9a9640c7108" containerName="collect-profiles"
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.388053 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="c17bcfde-3d1b-407e-83f8-b9a9640c7108" containerName="collect-profiles"
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.388185 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="c17bcfde-3d1b-407e-83f8-b9a9640c7108" containerName="collect-profiles"
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.388196 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f86937-9b63-4748-9d3c-117cc276c910" containerName="registry-server"
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.388703 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gdt46"
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.390584 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-zr9z8"
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.391159 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.395128 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.398292 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gdt46"]
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.493959 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlszq\" (UniqueName: \"kubernetes.io/projected/e7071f61-c3c4-4f5a-b3ea-5fc268f55dbd-kube-api-access-nlszq\") pod \"openstack-operator-index-gdt46\" (UID: \"e7071f61-c3c4-4f5a-b3ea-5fc268f55dbd\") " pod="openstack-operators/openstack-operator-index-gdt46"
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.595795 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlszq\" (UniqueName: \"kubernetes.io/projected/e7071f61-c3c4-4f5a-b3ea-5fc268f55dbd-kube-api-access-nlszq\") pod \"openstack-operator-index-gdt46\" (UID: \"e7071f61-c3c4-4f5a-b3ea-5fc268f55dbd\") " pod="openstack-operators/openstack-operator-index-gdt46"
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.614241 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlszq\" (UniqueName: \"kubernetes.io/projected/e7071f61-c3c4-4f5a-b3ea-5fc268f55dbd-kube-api-access-nlszq\") pod \"openstack-operator-index-gdt46\" (UID: \"e7071f61-c3c4-4f5a-b3ea-5fc268f55dbd\") " pod="openstack-operators/openstack-operator-index-gdt46"
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.706759 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gdt46"
Dec 02 14:00:07 crc kubenswrapper[4625]: I1202 14:00:07.972089 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gdt46"]
Dec 02 14:00:08 crc kubenswrapper[4625]: I1202 14:00:08.783066 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gdt46" event={"ID":"e7071f61-c3c4-4f5a-b3ea-5fc268f55dbd","Type":"ContainerStarted","Data":"c30966a9f8f94ac40b097cfbc33a4ce01291b58468147f3e9c3c3bdf5732a03a"}
Dec 02 14:00:10 crc kubenswrapper[4625]: I1202 14:00:10.204947 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-m2rh6"
Dec 02 14:00:10 crc kubenswrapper[4625]: I1202 14:00:10.820763 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb"
Dec 02 14:00:12 crc kubenswrapper[4625]: I1202 14:00:12.820573 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gdt46" event={"ID":"e7071f61-c3c4-4f5a-b3ea-5fc268f55dbd","Type":"ContainerStarted","Data":"9a34f5dbfe49b3ba7d346f643a2015ffedac83bf455a2a145131de637852188d"}
Dec 02 14:00:12 crc kubenswrapper[4625]: I1202 14:00:12.838776 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gdt46" podStartSLOduration=1.197348393 podStartE2EDuration="5.838754258s" podCreationTimestamp="2025-12-02 14:00:07 +0000 UTC" firstStartedPulling="2025-12-02 14:00:07.988573027 +0000 UTC m=+963.950750102" lastFinishedPulling="2025-12-02 14:00:12.629978892 +0000 UTC m=+968.592155967" observedRunningTime="2025-12-02 14:00:12.837630018 +0000 UTC m=+968.799807103" watchObservedRunningTime="2025-12-02 14:00:12.838754258 +0000 UTC m=+968.800931333"
Dec 02 14:00:17 crc kubenswrapper[4625]: I1202 14:00:17.708222 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-gdt46"
Dec 02 14:00:17 crc kubenswrapper[4625]: I1202 14:00:17.708805 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-gdt46"
Dec 02 14:00:17 crc kubenswrapper[4625]: I1202 14:00:17.744837 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-gdt46"
Dec 02 14:00:17 crc kubenswrapper[4625]: I1202 14:00:17.888034 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-gdt46"
Dec 02 14:00:18 crc kubenswrapper[4625]: I1202 14:00:18.830199 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk"]
Dec 02 14:00:18 crc kubenswrapper[4625]: I1202 14:00:18.832260 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk"
Dec 02 14:00:18 crc kubenswrapper[4625]: I1202 14:00:18.834297 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-t26wn"
Dec 02 14:00:18 crc kubenswrapper[4625]: I1202 14:00:18.844196 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk"]
Dec 02 14:00:18 crc kubenswrapper[4625]: I1202 14:00:18.949813 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj5tj\" (UniqueName: \"kubernetes.io/projected/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-kube-api-access-hj5tj\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk\" (UID: \"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk"
Dec 02 14:00:18 crc kubenswrapper[4625]: I1202 14:00:18.949930 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-bundle\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk\" (UID: \"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk"
Dec 02 14:00:18 crc kubenswrapper[4625]: I1202 14:00:18.949956 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-util\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk\" (UID: \"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk"
Dec 02 14:00:19 crc kubenswrapper[4625]: I1202 14:00:19.051580 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-bundle\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk\" (UID: \"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk"
Dec 02 14:00:19 crc kubenswrapper[4625]: I1202 14:00:19.051640 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-util\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk\" (UID: \"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk"
Dec 02 14:00:19 crc kubenswrapper[4625]: I1202 14:00:19.051744 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj5tj\" (UniqueName: \"kubernetes.io/projected/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-kube-api-access-hj5tj\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk\" (UID: \"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk"
Dec 02 14:00:19 crc kubenswrapper[4625]: I1202 14:00:19.052219 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName:
\"kubernetes.io/empty-dir/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-bundle\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk\" (UID: \"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk" Dec 02 14:00:19 crc kubenswrapper[4625]: I1202 14:00:19.052553 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-util\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk\" (UID: \"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk" Dec 02 14:00:19 crc kubenswrapper[4625]: I1202 14:00:19.071798 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj5tj\" (UniqueName: \"kubernetes.io/projected/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-kube-api-access-hj5tj\") pod \"75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk\" (UID: \"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8\") " pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk" Dec 02 14:00:19 crc kubenswrapper[4625]: I1202 14:00:19.154679 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk" Dec 02 14:00:19 crc kubenswrapper[4625]: I1202 14:00:19.342300 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:00:19 crc kubenswrapper[4625]: I1202 14:00:19.342368 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:00:19 crc kubenswrapper[4625]: I1202 14:00:19.837015 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk"] Dec 02 14:00:19 crc kubenswrapper[4625]: W1202 14:00:19.845941 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7a86a59_0433_4fd8_95b8_f1ca65eeaba8.slice/crio-56362c5cfae42b2b04a69c2be5eb561b4117f2371b0c11a739a73473bea6b328 WatchSource:0}: Error finding container 56362c5cfae42b2b04a69c2be5eb561b4117f2371b0c11a739a73473bea6b328: Status 404 returned error can't find the container with id 56362c5cfae42b2b04a69c2be5eb561b4117f2371b0c11a739a73473bea6b328 Dec 02 14:00:19 crc kubenswrapper[4625]: I1202 14:00:19.862967 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk" event={"ID":"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8","Type":"ContainerStarted","Data":"56362c5cfae42b2b04a69c2be5eb561b4117f2371b0c11a739a73473bea6b328"} Dec 02 14:00:20 crc kubenswrapper[4625]: I1202 14:00:20.873574 4625 generic.go:334] "Generic (PLEG): container finished" podID="f7a86a59-0433-4fd8-95b8-f1ca65eeaba8" containerID="45e295703960a1ebe6d85a054d9fbfe763a3bfd69dde9ab2e20171e5e5d60cd8" exitCode=0 Dec 02 
14:00:20 crc kubenswrapper[4625]: I1202 14:00:20.873693 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk" event={"ID":"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8","Type":"ContainerDied","Data":"45e295703960a1ebe6d85a054d9fbfe763a3bfd69dde9ab2e20171e5e5d60cd8"} Dec 02 14:00:21 crc kubenswrapper[4625]: I1202 14:00:21.883544 4625 generic.go:334] "Generic (PLEG): container finished" podID="f7a86a59-0433-4fd8-95b8-f1ca65eeaba8" containerID="26e032229f32e016158e0185277e42b7645497ec0c72ee4d8e1657a75ee4568b" exitCode=0 Dec 02 14:00:21 crc kubenswrapper[4625]: I1202 14:00:21.883605 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk" event={"ID":"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8","Type":"ContainerDied","Data":"26e032229f32e016158e0185277e42b7645497ec0c72ee4d8e1657a75ee4568b"} Dec 02 14:00:22 crc kubenswrapper[4625]: I1202 14:00:22.895959 4625 generic.go:334] "Generic (PLEG): container finished" podID="f7a86a59-0433-4fd8-95b8-f1ca65eeaba8" containerID="3f58529dd93aea6c0f3422032b96464a825c6382b32b01b49befa25e626446f3" exitCode=0 Dec 02 14:00:22 crc kubenswrapper[4625]: I1202 14:00:22.896025 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk" event={"ID":"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8","Type":"ContainerDied","Data":"3f58529dd93aea6c0f3422032b96464a825c6382b32b01b49befa25e626446f3"} Dec 02 14:00:24 crc kubenswrapper[4625]: I1202 14:00:24.173656 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk" Dec 02 14:00:24 crc kubenswrapper[4625]: I1202 14:00:24.329265 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj5tj\" (UniqueName: \"kubernetes.io/projected/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-kube-api-access-hj5tj\") pod \"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8\" (UID: \"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8\") " Dec 02 14:00:24 crc kubenswrapper[4625]: I1202 14:00:24.329419 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-util\") pod \"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8\" (UID: \"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8\") " Dec 02 14:00:24 crc kubenswrapper[4625]: I1202 14:00:24.332377 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-bundle" (OuterVolumeSpecName: "bundle") pod "f7a86a59-0433-4fd8-95b8-f1ca65eeaba8" (UID: "f7a86a59-0433-4fd8-95b8-f1ca65eeaba8"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:00:24 crc kubenswrapper[4625]: I1202 14:00:24.332707 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-bundle\") pod \"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8\" (UID: \"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8\") " Dec 02 14:00:24 crc kubenswrapper[4625]: I1202 14:00:24.333533 4625 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:00:24 crc kubenswrapper[4625]: I1202 14:00:24.335725 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-kube-api-access-hj5tj" (OuterVolumeSpecName: "kube-api-access-hj5tj") pod "f7a86a59-0433-4fd8-95b8-f1ca65eeaba8" (UID: "f7a86a59-0433-4fd8-95b8-f1ca65eeaba8"). InnerVolumeSpecName "kube-api-access-hj5tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:00:24 crc kubenswrapper[4625]: I1202 14:00:24.343723 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-util" (OuterVolumeSpecName: "util") pod "f7a86a59-0433-4fd8-95b8-f1ca65eeaba8" (UID: "f7a86a59-0433-4fd8-95b8-f1ca65eeaba8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:00:24 crc kubenswrapper[4625]: I1202 14:00:24.445842 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj5tj\" (UniqueName: \"kubernetes.io/projected/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-kube-api-access-hj5tj\") on node \"crc\" DevicePath \"\"" Dec 02 14:00:24 crc kubenswrapper[4625]: I1202 14:00:24.445887 4625 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f7a86a59-0433-4fd8-95b8-f1ca65eeaba8-util\") on node \"crc\" DevicePath \"\"" Dec 02 14:00:24 crc kubenswrapper[4625]: I1202 14:00:24.913238 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk" event={"ID":"f7a86a59-0433-4fd8-95b8-f1ca65eeaba8","Type":"ContainerDied","Data":"56362c5cfae42b2b04a69c2be5eb561b4117f2371b0c11a739a73473bea6b328"} Dec 02 14:00:24 crc kubenswrapper[4625]: I1202 14:00:24.913294 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk" Dec 02 14:00:24 crc kubenswrapper[4625]: I1202 14:00:24.913296 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56362c5cfae42b2b04a69c2be5eb561b4117f2371b0c11a739a73473bea6b328" Dec 02 14:00:27 crc kubenswrapper[4625]: I1202 14:00:27.815524 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-84d58866d9-k5nd2"] Dec 02 14:00:27 crc kubenswrapper[4625]: E1202 14:00:27.816679 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a86a59-0433-4fd8-95b8-f1ca65eeaba8" containerName="pull" Dec 02 14:00:27 crc kubenswrapper[4625]: I1202 14:00:27.816712 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a86a59-0433-4fd8-95b8-f1ca65eeaba8" containerName="pull" Dec 02 14:00:27 crc kubenswrapper[4625]: E1202 14:00:27.816730 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a86a59-0433-4fd8-95b8-f1ca65eeaba8" containerName="util" Dec 02 14:00:27 crc kubenswrapper[4625]: I1202 14:00:27.816738 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a86a59-0433-4fd8-95b8-f1ca65eeaba8" containerName="util" Dec 02 14:00:27 crc kubenswrapper[4625]: E1202 14:00:27.816782 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a86a59-0433-4fd8-95b8-f1ca65eeaba8" containerName="extract" Dec 02 14:00:27 crc kubenswrapper[4625]: I1202 14:00:27.816795 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a86a59-0433-4fd8-95b8-f1ca65eeaba8" containerName="extract" Dec 02 14:00:27 crc kubenswrapper[4625]: I1202 14:00:27.816974 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a86a59-0433-4fd8-95b8-f1ca65eeaba8" containerName="extract" Dec 02 14:00:27 crc kubenswrapper[4625]: I1202 14:00:27.820099 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-k5nd2" Dec 02 14:00:27 crc kubenswrapper[4625]: I1202 14:00:27.824849 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-rv7f4" Dec 02 14:00:27 crc kubenswrapper[4625]: I1202 14:00:27.858938 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-84d58866d9-k5nd2"] Dec 02 14:00:27 crc kubenswrapper[4625]: I1202 14:00:27.991743 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mms7h"] Dec 02 14:00:27 crc kubenswrapper[4625]: I1202 14:00:27.992949 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.004091 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvqlb\" (UniqueName: \"kubernetes.io/projected/aebd70d9-f01d-4141-bfc6-972472620c50-kube-api-access-mvqlb\") pod \"openstack-operator-controller-operator-84d58866d9-k5nd2\" (UID: \"aebd70d9-f01d-4141-bfc6-972472620c50\") " pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-k5nd2" Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.005365 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mms7h"] Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.105987 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d542c9c0-9ad4-4c99-becc-6292488d034e-utilities\") pod \"community-operators-mms7h\" (UID: \"d542c9c0-9ad4-4c99-becc-6292488d034e\") " pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.106078 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvqlb\" (UniqueName: \"kubernetes.io/projected/aebd70d9-f01d-4141-bfc6-972472620c50-kube-api-access-mvqlb\") pod \"openstack-operator-controller-operator-84d58866d9-k5nd2\" (UID: \"aebd70d9-f01d-4141-bfc6-972472620c50\") " pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-k5nd2" Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.106183 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d542c9c0-9ad4-4c99-becc-6292488d034e-catalog-content\") pod \"community-operators-mms7h\" (UID: \"d542c9c0-9ad4-4c99-becc-6292488d034e\") " pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.106264 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk7vw\" (UniqueName: \"kubernetes.io/projected/d542c9c0-9ad4-4c99-becc-6292488d034e-kube-api-access-lk7vw\") pod \"community-operators-mms7h\" (UID: \"d542c9c0-9ad4-4c99-becc-6292488d034e\") " pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.132466 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvqlb\" (UniqueName: \"kubernetes.io/projected/aebd70d9-f01d-4141-bfc6-972472620c50-kube-api-access-mvqlb\") pod \"openstack-operator-controller-operator-84d58866d9-k5nd2\" (UID: \"aebd70d9-f01d-4141-bfc6-972472620c50\") " pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-k5nd2" Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.146687 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-k5nd2" Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.207815 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d542c9c0-9ad4-4c99-becc-6292488d034e-catalog-content\") pod \"community-operators-mms7h\" (UID: \"d542c9c0-9ad4-4c99-becc-6292488d034e\") " pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.208161 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk7vw\" (UniqueName: \"kubernetes.io/projected/d542c9c0-9ad4-4c99-becc-6292488d034e-kube-api-access-lk7vw\") pod \"community-operators-mms7h\" (UID: \"d542c9c0-9ad4-4c99-becc-6292488d034e\") " pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.208293 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d542c9c0-9ad4-4c99-becc-6292488d034e-utilities\") pod \"community-operators-mms7h\" (UID: \"d542c9c0-9ad4-4c99-becc-6292488d034e\") " pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.208394 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d542c9c0-9ad4-4c99-becc-6292488d034e-catalog-content\") pod \"community-operators-mms7h\" (UID: \"d542c9c0-9ad4-4c99-becc-6292488d034e\") " pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.208727 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d542c9c0-9ad4-4c99-becc-6292488d034e-utilities\") pod \"community-operators-mms7h\" (UID: \"d542c9c0-9ad4-4c99-becc-6292488d034e\") " pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.232987 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk7vw\" (UniqueName: \"kubernetes.io/projected/d542c9c0-9ad4-4c99-becc-6292488d034e-kube-api-access-lk7vw\") pod \"community-operators-mms7h\" (UID: \"d542c9c0-9ad4-4c99-becc-6292488d034e\") " pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.315484 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.781206 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mms7h"] Dec 02 14:00:28 crc kubenswrapper[4625]: I1202 14:00:28.948609 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mms7h" event={"ID":"d542c9c0-9ad4-4c99-becc-6292488d034e","Type":"ContainerStarted","Data":"a6b5542350f82e987d213ba3cf2fc6e79bb0989242649b5e37c0544f6dfb36de"} Dec 02 14:00:29 crc kubenswrapper[4625]: W1202 14:00:29.091137 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaebd70d9_f01d_4141_bfc6_972472620c50.slice/crio-19b597a9def5883eb15fe2ba74dddbf5b42b9f31c1a9cfc6f015dbe97ee888a5 WatchSource:0}: Error finding container 19b597a9def5883eb15fe2ba74dddbf5b42b9f31c1a9cfc6f015dbe97ee888a5: Status 404 returned error can't find the container with id 19b597a9def5883eb15fe2ba74dddbf5b42b9f31c1a9cfc6f015dbe97ee888a5 Dec 02 14:00:29 crc kubenswrapper[4625]: I1202 14:00:29.092022 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-84d58866d9-k5nd2"] Dec 02 14:00:29 crc kubenswrapper[4625]: I1202 14:00:29.959632 4625 generic.go:334] "Generic (PLEG): container finished" podID="d542c9c0-9ad4-4c99-becc-6292488d034e" containerID="90a93ccdc4dc99cf41d62e02b39fcf2d307373020ce3fb11e4320c8aa83fc681" exitCode=0 Dec 02 14:00:29 crc kubenswrapper[4625]: I1202 14:00:29.959833 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mms7h" event={"ID":"d542c9c0-9ad4-4c99-becc-6292488d034e","Type":"ContainerDied","Data":"90a93ccdc4dc99cf41d62e02b39fcf2d307373020ce3fb11e4320c8aa83fc681"} Dec 02 14:00:29 crc kubenswrapper[4625]: I1202 14:00:29.961934 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-k5nd2" event={"ID":"aebd70d9-f01d-4141-bfc6-972472620c50","Type":"ContainerStarted","Data":"19b597a9def5883eb15fe2ba74dddbf5b42b9f31c1a9cfc6f015dbe97ee888a5"} Dec 02 14:00:38 crc kubenswrapper[4625]: I1202 14:00:38.173339 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mms7h" event={"ID":"d542c9c0-9ad4-4c99-becc-6292488d034e","Type":"ContainerStarted","Data":"0e8dc4f56e67c8dbd2b9144bafe3f551c0797014e6f4c15f3a4479491ed61146"} Dec 02 14:00:38 crc kubenswrapper[4625]: I1202 14:00:38.176557 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-k5nd2" event={"ID":"aebd70d9-f01d-4141-bfc6-972472620c50","Type":"ContainerStarted","Data":"3686cb364585059b45bac0c545ac1813d4a7ff0edbeff16b18bc766185775f31"} Dec 02 14:00:38 crc kubenswrapper[4625]: I1202 14:00:38.176870 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-k5nd2" Dec 02 14:00:38 crc kubenswrapper[4625]: I1202 14:00:38.229267 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-k5nd2" podStartSLOduration=2.6113109850000003 podStartE2EDuration="11.229242438s" podCreationTimestamp="2025-12-02 14:00:27 +0000 UTC" firstStartedPulling="2025-12-02 14:00:29.094929027 +0000 UTC 
m=+985.057106102" lastFinishedPulling="2025-12-02 14:00:37.71286048 +0000 UTC m=+993.675037555" observedRunningTime="2025-12-02 14:00:38.221635239 +0000 UTC m=+994.183812314" watchObservedRunningTime="2025-12-02 14:00:38.229242438 +0000 UTC m=+994.191419513" Dec 02 14:00:39 crc kubenswrapper[4625]: I1202 14:00:39.184190 4625 generic.go:334] "Generic (PLEG): container finished" podID="d542c9c0-9ad4-4c99-becc-6292488d034e" containerID="0e8dc4f56e67c8dbd2b9144bafe3f551c0797014e6f4c15f3a4479491ed61146" exitCode=0 Dec 02 14:00:39 crc kubenswrapper[4625]: I1202 14:00:39.184270 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mms7h" event={"ID":"d542c9c0-9ad4-4c99-becc-6292488d034e","Type":"ContainerDied","Data":"0e8dc4f56e67c8dbd2b9144bafe3f551c0797014e6f4c15f3a4479491ed61146"} Dec 02 14:00:39 crc kubenswrapper[4625]: I1202 14:00:39.196927 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nww9n"] Dec 02 14:00:39 crc kubenswrapper[4625]: I1202 14:00:39.201550 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:00:39 crc kubenswrapper[4625]: I1202 14:00:39.215863 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015d5ce1-8720-4d4e-a628-9c72d1610cb2-utilities\") pod \"certified-operators-nww9n\" (UID: \"015d5ce1-8720-4d4e-a628-9c72d1610cb2\") " pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:00:39 crc kubenswrapper[4625]: I1202 14:00:39.216037 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf7rg\" (UniqueName: \"kubernetes.io/projected/015d5ce1-8720-4d4e-a628-9c72d1610cb2-kube-api-access-wf7rg\") pod \"certified-operators-nww9n\" (UID: \"015d5ce1-8720-4d4e-a628-9c72d1610cb2\") " pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:00:39 crc kubenswrapper[4625]: I1202 14:00:39.216117 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015d5ce1-8720-4d4e-a628-9c72d1610cb2-catalog-content\") pod \"certified-operators-nww9n\" (UID: \"015d5ce1-8720-4d4e-a628-9c72d1610cb2\") " pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:00:39 crc kubenswrapper[4625]: I1202 14:00:39.217665 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nww9n"] Dec 02 14:00:39 crc kubenswrapper[4625]: I1202 14:00:39.317667 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015d5ce1-8720-4d4e-a628-9c72d1610cb2-utilities\") pod \"certified-operators-nww9n\" (UID: \"015d5ce1-8720-4d4e-a628-9c72d1610cb2\") " pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:00:39 crc kubenswrapper[4625]: I1202 14:00:39.317836 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf7rg\" (UniqueName: \"kubernetes.io/projected/015d5ce1-8720-4d4e-a628-9c72d1610cb2-kube-api-access-wf7rg\") pod \"certified-operators-nww9n\" (UID: \"015d5ce1-8720-4d4e-a628-9c72d1610cb2\") " pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:00:39 crc kubenswrapper[4625]: I1202 14:00:39.317880 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015d5ce1-8720-4d4e-a628-9c72d1610cb2-catalog-content\") pod \"certified-operators-nww9n\" (UID: \"015d5ce1-8720-4d4e-a628-9c72d1610cb2\") " pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:00:39 crc kubenswrapper[4625]: I1202 14:00:39.318215 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015d5ce1-8720-4d4e-a628-9c72d1610cb2-utilities\") pod \"certified-operators-nww9n\" (UID: \"015d5ce1-8720-4d4e-a628-9c72d1610cb2\") " pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:00:39 crc kubenswrapper[4625]: I1202 14:00:39.318459 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015d5ce1-8720-4d4e-a628-9c72d1610cb2-catalog-content\") pod \"certified-operators-nww9n\" (UID: \"015d5ce1-8720-4d4e-a628-9c72d1610cb2\") " pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:00:39 crc kubenswrapper[4625]: I1202 14:00:39.339998 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf7rg\" (UniqueName: \"kubernetes.io/projected/015d5ce1-8720-4d4e-a628-9c72d1610cb2-kube-api-access-wf7rg\") pod \"certified-operators-nww9n\" (UID: \"015d5ce1-8720-4d4e-a628-9c72d1610cb2\") " pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:00:39 crc kubenswrapper[4625]: I1202 14:00:39.529628 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:00:40 crc kubenswrapper[4625]: I1202 14:00:40.296724 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mms7h" event={"ID":"d542c9c0-9ad4-4c99-becc-6292488d034e","Type":"ContainerStarted","Data":"04cf987546f74fe2aa173598199d821df5c377472c353b183d125105b3111f92"} Dec 02 14:00:40 crc kubenswrapper[4625]: I1202 14:00:40.302475 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nww9n"] Dec 02 14:00:40 crc kubenswrapper[4625]: I1202 14:00:40.332160 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mms7h" podStartSLOduration=3.619061582 podStartE2EDuration="13.332119575s" podCreationTimestamp="2025-12-02 14:00:27 +0000 UTC" firstStartedPulling="2025-12-02 14:00:29.962455722 +0000 UTC m=+985.924632797" lastFinishedPulling="2025-12-02 14:00:39.675513715 +0000 UTC m=+995.637690790" observedRunningTime="2025-12-02 14:00:40.332049393 +0000 UTC m=+996.294226458" watchObservedRunningTime="2025-12-02 14:00:40.332119575 +0000 UTC m=+996.294296650" Dec 02 14:00:41 crc kubenswrapper[4625]: I1202 14:00:41.308638 4625 generic.go:334] "Generic (PLEG): container finished" podID="015d5ce1-8720-4d4e-a628-9c72d1610cb2" containerID="0730dfdfc65139e7a9a9a85969d21ef6deb7b8b42cae8d28ac1f15aa426ad3b3" exitCode=0 Dec 02 14:00:41 crc kubenswrapper[4625]: I1202 14:00:41.308713 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nww9n" event={"ID":"015d5ce1-8720-4d4e-a628-9c72d1610cb2","Type":"ContainerDied","Data":"0730dfdfc65139e7a9a9a85969d21ef6deb7b8b42cae8d28ac1f15aa426ad3b3"} Dec 02 14:00:41 crc kubenswrapper[4625]: I1202 14:00:41.309013 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nww9n" 
event={"ID":"015d5ce1-8720-4d4e-a628-9c72d1610cb2","Type":"ContainerStarted","Data":"be2e3624c38eebecb95eb10414eae4ce95908f83243eb6ef612bc9fb06803e60"} Dec 02 14:00:43 crc kubenswrapper[4625]: I1202 14:00:43.387725 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nww9n" event={"ID":"015d5ce1-8720-4d4e-a628-9c72d1610cb2","Type":"ContainerStarted","Data":"f81f6fc4148668d4673b0fb05bf8d6e16d6fd974ac4006f93878e6a9c78e3c03"} Dec 02 14:00:47 crc kubenswrapper[4625]: I1202 14:00:47.418894 4625 generic.go:334] "Generic (PLEG): container finished" podID="015d5ce1-8720-4d4e-a628-9c72d1610cb2" containerID="f81f6fc4148668d4673b0fb05bf8d6e16d6fd974ac4006f93878e6a9c78e3c03" exitCode=0 Dec 02 14:00:47 crc kubenswrapper[4625]: I1202 14:00:47.418979 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nww9n" event={"ID":"015d5ce1-8720-4d4e-a628-9c72d1610cb2","Type":"ContainerDied","Data":"f81f6fc4148668d4673b0fb05bf8d6e16d6fd974ac4006f93878e6a9c78e3c03"} Dec 02 14:00:48 crc kubenswrapper[4625]: I1202 14:00:48.151101 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-84d58866d9-k5nd2" Dec 02 14:00:48 crc kubenswrapper[4625]: I1202 14:00:48.315794 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:48 crc kubenswrapper[4625]: I1202 14:00:48.316359 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:48 crc kubenswrapper[4625]: I1202 14:00:48.367447 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:48 crc kubenswrapper[4625]: I1202 14:00:48.430476 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nww9n" event={"ID":"015d5ce1-8720-4d4e-a628-9c72d1610cb2","Type":"ContainerStarted","Data":"1e6c4ceefec97b42c426a9ee4e00cc82ebed5392397605274b1c27784e9266f4"} Dec 02 14:00:48 crc kubenswrapper[4625]: I1202 14:00:48.462574 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nww9n" podStartSLOduration=2.6283028379999998 podStartE2EDuration="9.462545453s" podCreationTimestamp="2025-12-02 14:00:39 +0000 UTC" firstStartedPulling="2025-12-02 14:00:41.31095289 +0000 UTC m=+997.273129965" lastFinishedPulling="2025-12-02 14:00:48.145195505 +0000 UTC m=+1004.107372580" observedRunningTime="2025-12-02 14:00:48.457479355 +0000 UTC m=+1004.419656450" watchObservedRunningTime="2025-12-02 14:00:48.462545453 +0000 UTC m=+1004.424722528" Dec 02 14:00:48 crc kubenswrapper[4625]: I1202 14:00:48.485558 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:49 crc kubenswrapper[4625]: I1202 14:00:49.271584 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:00:49 crc kubenswrapper[4625]: I1202 14:00:49.271794 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" 
podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:00:49 crc kubenswrapper[4625]: I1202 14:00:49.530762 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:00:49 crc kubenswrapper[4625]: I1202 14:00:49.530853 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:00:50 crc kubenswrapper[4625]: I1202 14:00:50.583061 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nww9n" podUID="015d5ce1-8720-4d4e-a628-9c72d1610cb2" containerName="registry-server" probeResult="failure" output=< Dec 02 14:00:50 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s Dec 02 14:00:50 crc kubenswrapper[4625]: > Dec 02 14:00:51 crc kubenswrapper[4625]: I1202 14:00:51.379407 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mms7h"] Dec 02 14:00:51 crc kubenswrapper[4625]: I1202 14:00:51.451428 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mms7h" podUID="d542c9c0-9ad4-4c99-becc-6292488d034e" containerName="registry-server" containerID="cri-o://04cf987546f74fe2aa173598199d821df5c377472c353b183d125105b3111f92" gracePeriod=2 Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.120522 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.207467 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d542c9c0-9ad4-4c99-becc-6292488d034e-utilities\") pod \"d542c9c0-9ad4-4c99-becc-6292488d034e\" (UID: \"d542c9c0-9ad4-4c99-becc-6292488d034e\") " Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.207540 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d542c9c0-9ad4-4c99-becc-6292488d034e-catalog-content\") pod \"d542c9c0-9ad4-4c99-becc-6292488d034e\" (UID: \"d542c9c0-9ad4-4c99-becc-6292488d034e\") " Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.207705 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk7vw\" (UniqueName: \"kubernetes.io/projected/d542c9c0-9ad4-4c99-becc-6292488d034e-kube-api-access-lk7vw\") pod \"d542c9c0-9ad4-4c99-becc-6292488d034e\" (UID: \"d542c9c0-9ad4-4c99-becc-6292488d034e\") " Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.209165 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d542c9c0-9ad4-4c99-becc-6292488d034e-utilities" (OuterVolumeSpecName: "utilities") pod "d542c9c0-9ad4-4c99-becc-6292488d034e" (UID: "d542c9c0-9ad4-4c99-becc-6292488d034e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.226574 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d542c9c0-9ad4-4c99-becc-6292488d034e-kube-api-access-lk7vw" (OuterVolumeSpecName: "kube-api-access-lk7vw") pod "d542c9c0-9ad4-4c99-becc-6292488d034e" (UID: "d542c9c0-9ad4-4c99-becc-6292488d034e"). InnerVolumeSpecName "kube-api-access-lk7vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.313294 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk7vw\" (UniqueName: \"kubernetes.io/projected/d542c9c0-9ad4-4c99-becc-6292488d034e-kube-api-access-lk7vw\") on node \"crc\" DevicePath \"\"" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.313352 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d542c9c0-9ad4-4c99-becc-6292488d034e-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.336104 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d542c9c0-9ad4-4c99-becc-6292488d034e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d542c9c0-9ad4-4c99-becc-6292488d034e" (UID: "d542c9c0-9ad4-4c99-becc-6292488d034e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.416465 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d542c9c0-9ad4-4c99-becc-6292488d034e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.461371 4625 generic.go:334] "Generic (PLEG): container finished" podID="d542c9c0-9ad4-4c99-becc-6292488d034e" containerID="04cf987546f74fe2aa173598199d821df5c377472c353b183d125105b3111f92" exitCode=0 Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.461457 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mms7h" event={"ID":"d542c9c0-9ad4-4c99-becc-6292488d034e","Type":"ContainerDied","Data":"04cf987546f74fe2aa173598199d821df5c377472c353b183d125105b3111f92"} Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.461880 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mms7h" event={"ID":"d542c9c0-9ad4-4c99-becc-6292488d034e","Type":"ContainerDied","Data":"a6b5542350f82e987d213ba3cf2fc6e79bb0989242649b5e37c0544f6dfb36de"} Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.461550 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mms7h" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.461937 4625 scope.go:117] "RemoveContainer" containerID="04cf987546f74fe2aa173598199d821df5c377472c353b183d125105b3111f92" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.486734 4625 scope.go:117] "RemoveContainer" containerID="0e8dc4f56e67c8dbd2b9144bafe3f551c0797014e6f4c15f3a4479491ed61146" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.550501 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mms7h"] Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.585642 4625 scope.go:117] "RemoveContainer" containerID="90a93ccdc4dc99cf41d62e02b39fcf2d307373020ce3fb11e4320c8aa83fc681" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.620105 4625 scope.go:117] "RemoveContainer" containerID="04cf987546f74fe2aa173598199d821df5c377472c353b183d125105b3111f92" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.621826 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mms7h"] Dec 02 14:00:52 crc kubenswrapper[4625]: E1202 14:00:52.625301 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04cf987546f74fe2aa173598199d821df5c377472c353b183d125105b3111f92\": container with ID starting with 04cf987546f74fe2aa173598199d821df5c377472c353b183d125105b3111f92 not found: ID does not exist" containerID="04cf987546f74fe2aa173598199d821df5c377472c353b183d125105b3111f92" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.626174 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04cf987546f74fe2aa173598199d821df5c377472c353b183d125105b3111f92"} err="failed to get container status \"04cf987546f74fe2aa173598199d821df5c377472c353b183d125105b3111f92\": rpc error: code = NotFound desc = could not find container \"04cf987546f74fe2aa173598199d821df5c377472c353b183d125105b3111f92\": container with ID starting with 04cf987546f74fe2aa173598199d821df5c377472c353b183d125105b3111f92 not found: ID does not exist" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.626389 4625 scope.go:117] "RemoveContainer" containerID="0e8dc4f56e67c8dbd2b9144bafe3f551c0797014e6f4c15f3a4479491ed61146" Dec 02 14:00:52 crc kubenswrapper[4625]: E1202 14:00:52.627279 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e8dc4f56e67c8dbd2b9144bafe3f551c0797014e6f4c15f3a4479491ed61146\": container with ID starting with 0e8dc4f56e67c8dbd2b9144bafe3f551c0797014e6f4c15f3a4479491ed61146 not found: ID does not exist" containerID="0e8dc4f56e67c8dbd2b9144bafe3f551c0797014e6f4c15f3a4479491ed61146" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.627379 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e8dc4f56e67c8dbd2b9144bafe3f551c0797014e6f4c15f3a4479491ed61146"} err="failed to get container status \"0e8dc4f56e67c8dbd2b9144bafe3f551c0797014e6f4c15f3a4479491ed61146\": rpc error: code = NotFound desc = could not find container \"0e8dc4f56e67c8dbd2b9144bafe3f551c0797014e6f4c15f3a4479491ed61146\": container with ID starting with 0e8dc4f56e67c8dbd2b9144bafe3f551c0797014e6f4c15f3a4479491ed61146 not found: ID does not exist" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.627422 4625 scope.go:117] "RemoveContainer" 
containerID="90a93ccdc4dc99cf41d62e02b39fcf2d307373020ce3fb11e4320c8aa83fc681" Dec 02 14:00:52 crc kubenswrapper[4625]: E1202 14:00:52.628170 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90a93ccdc4dc99cf41d62e02b39fcf2d307373020ce3fb11e4320c8aa83fc681\": container with ID starting with 90a93ccdc4dc99cf41d62e02b39fcf2d307373020ce3fb11e4320c8aa83fc681 not found: ID does not exist" containerID="90a93ccdc4dc99cf41d62e02b39fcf2d307373020ce3fb11e4320c8aa83fc681" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.628259 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90a93ccdc4dc99cf41d62e02b39fcf2d307373020ce3fb11e4320c8aa83fc681"} err="failed to get container status \"90a93ccdc4dc99cf41d62e02b39fcf2d307373020ce3fb11e4320c8aa83fc681\": rpc error: code = NotFound desc = could not find container \"90a93ccdc4dc99cf41d62e02b39fcf2d307373020ce3fb11e4320c8aa83fc681\": container with ID starting with 90a93ccdc4dc99cf41d62e02b39fcf2d307373020ce3fb11e4320c8aa83fc681 not found: ID does not exist" Dec 02 14:00:52 crc kubenswrapper[4625]: I1202 14:00:52.864696 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d542c9c0-9ad4-4c99-becc-6292488d034e" path="/var/lib/kubelet/pods/d542c9c0-9ad4-4c99-becc-6292488d034e/volumes" Dec 02 14:00:59 crc kubenswrapper[4625]: I1202 14:00:59.690003 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:00:59 crc kubenswrapper[4625]: I1202 14:00:59.807504 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:00:59 crc kubenswrapper[4625]: I1202 14:00:59.947871 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nww9n"] Dec 02 14:01:01 crc kubenswrapper[4625]: I1202 14:01:01.530901 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nww9n" podUID="015d5ce1-8720-4d4e-a628-9c72d1610cb2" containerName="registry-server" containerID="cri-o://1e6c4ceefec97b42c426a9ee4e00cc82ebed5392397605274b1c27784e9266f4" gracePeriod=2 Dec 02 14:01:02 crc kubenswrapper[4625]: I1202 14:01:02.545036 4625 generic.go:334] "Generic (PLEG): container finished" podID="015d5ce1-8720-4d4e-a628-9c72d1610cb2" containerID="1e6c4ceefec97b42c426a9ee4e00cc82ebed5392397605274b1c27784e9266f4" exitCode=0 Dec 02 14:01:02 crc kubenswrapper[4625]: I1202 14:01:02.545451 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nww9n" event={"ID":"015d5ce1-8720-4d4e-a628-9c72d1610cb2","Type":"ContainerDied","Data":"1e6c4ceefec97b42c426a9ee4e00cc82ebed5392397605274b1c27784e9266f4"} Dec 02 14:01:02 crc kubenswrapper[4625]: I1202 14:01:02.641479 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:01:02 crc kubenswrapper[4625]: I1202 14:01:02.832592 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015d5ce1-8720-4d4e-a628-9c72d1610cb2-catalog-content\") pod \"015d5ce1-8720-4d4e-a628-9c72d1610cb2\" (UID: \"015d5ce1-8720-4d4e-a628-9c72d1610cb2\") " Dec 02 14:01:02 crc kubenswrapper[4625]: I1202 14:01:02.832680 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015d5ce1-8720-4d4e-a628-9c72d1610cb2-utilities\") pod \"015d5ce1-8720-4d4e-a628-9c72d1610cb2\" (UID: \"015d5ce1-8720-4d4e-a628-9c72d1610cb2\") " Dec 02 14:01:02 crc kubenswrapper[4625]: I1202 14:01:02.833111 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf7rg\" (UniqueName: \"kubernetes.io/projected/015d5ce1-8720-4d4e-a628-9c72d1610cb2-kube-api-access-wf7rg\") pod \"015d5ce1-8720-4d4e-a628-9c72d1610cb2\" (UID: \"015d5ce1-8720-4d4e-a628-9c72d1610cb2\") " Dec 02 14:01:02 crc kubenswrapper[4625]: I1202 14:01:02.835797 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/015d5ce1-8720-4d4e-a628-9c72d1610cb2-utilities" (OuterVolumeSpecName: "utilities") pod "015d5ce1-8720-4d4e-a628-9c72d1610cb2" (UID: "015d5ce1-8720-4d4e-a628-9c72d1610cb2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:01:02 crc kubenswrapper[4625]: I1202 14:01:02.849786 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015d5ce1-8720-4d4e-a628-9c72d1610cb2-kube-api-access-wf7rg" (OuterVolumeSpecName: "kube-api-access-wf7rg") pod "015d5ce1-8720-4d4e-a628-9c72d1610cb2" (UID: "015d5ce1-8720-4d4e-a628-9c72d1610cb2"). InnerVolumeSpecName "kube-api-access-wf7rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:01:02 crc kubenswrapper[4625]: I1202 14:01:02.894639 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/015d5ce1-8720-4d4e-a628-9c72d1610cb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "015d5ce1-8720-4d4e-a628-9c72d1610cb2" (UID: "015d5ce1-8720-4d4e-a628-9c72d1610cb2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:01:02 crc kubenswrapper[4625]: I1202 14:01:02.935202 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf7rg\" (UniqueName: \"kubernetes.io/projected/015d5ce1-8720-4d4e-a628-9c72d1610cb2-kube-api-access-wf7rg\") on node \"crc\" DevicePath \"\"" Dec 02 14:01:02 crc kubenswrapper[4625]: I1202 14:01:02.935260 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/015d5ce1-8720-4d4e-a628-9c72d1610cb2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:01:02 crc kubenswrapper[4625]: I1202 14:01:02.935273 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/015d5ce1-8720-4d4e-a628-9c72d1610cb2-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:01:03 crc kubenswrapper[4625]: I1202 14:01:03.557801 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nww9n" event={"ID":"015d5ce1-8720-4d4e-a628-9c72d1610cb2","Type":"ContainerDied","Data":"be2e3624c38eebecb95eb10414eae4ce95908f83243eb6ef612bc9fb06803e60"} Dec 02 14:01:03 crc kubenswrapper[4625]: I1202 14:01:03.557918 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nww9n" Dec 02 14:01:03 crc kubenswrapper[4625]: I1202 14:01:03.558190 4625 scope.go:117] "RemoveContainer" containerID="1e6c4ceefec97b42c426a9ee4e00cc82ebed5392397605274b1c27784e9266f4" Dec 02 14:01:03 crc kubenswrapper[4625]: I1202 14:01:03.588065 4625 scope.go:117] "RemoveContainer" containerID="f81f6fc4148668d4673b0fb05bf8d6e16d6fd974ac4006f93878e6a9c78e3c03" Dec 02 14:01:03 crc kubenswrapper[4625]: I1202 14:01:03.623521 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nww9n"] Dec 02 14:01:03 crc kubenswrapper[4625]: I1202 14:01:03.623656 4625 scope.go:117] "RemoveContainer" containerID="0730dfdfc65139e7a9a9a85969d21ef6deb7b8b42cae8d28ac1f15aa426ad3b3" Dec 02 14:01:03 crc kubenswrapper[4625]: I1202 14:01:03.630536 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nww9n"] Dec 02 14:01:04 crc kubenswrapper[4625]: I1202 14:01:04.866591 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="015d5ce1-8720-4d4e-a628-9c72d1610cb2" path="/var/lib/kubelet/pods/015d5ce1-8720-4d4e-a628-9c72d1610cb2/volumes" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.236545 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-vck47"] Dec 02 14:01:05 crc kubenswrapper[4625]: E1202 14:01:05.237506 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d542c9c0-9ad4-4c99-becc-6292488d034e" containerName="extract-utilities" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.237528 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="d542c9c0-9ad4-4c99-becc-6292488d034e" containerName="extract-utilities" Dec 02 14:01:05 crc kubenswrapper[4625]: E1202 14:01:05.237541 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015d5ce1-8720-4d4e-a628-9c72d1610cb2" containerName="extract-utilities" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.237550 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="015d5ce1-8720-4d4e-a628-9c72d1610cb2" containerName="extract-utilities" Dec 02 14:01:05 crc 
kubenswrapper[4625]: E1202 14:01:05.237571 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015d5ce1-8720-4d4e-a628-9c72d1610cb2" containerName="registry-server" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.237581 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="015d5ce1-8720-4d4e-a628-9c72d1610cb2" containerName="registry-server" Dec 02 14:01:05 crc kubenswrapper[4625]: E1202 14:01:05.237598 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d542c9c0-9ad4-4c99-becc-6292488d034e" containerName="extract-content" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.237605 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="d542c9c0-9ad4-4c99-becc-6292488d034e" containerName="extract-content" Dec 02 14:01:05 crc kubenswrapper[4625]: E1202 14:01:05.237615 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015d5ce1-8720-4d4e-a628-9c72d1610cb2" containerName="extract-content" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.237623 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="015d5ce1-8720-4d4e-a628-9c72d1610cb2" containerName="extract-content" Dec 02 14:01:05 crc kubenswrapper[4625]: E1202 14:01:05.237638 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d542c9c0-9ad4-4c99-becc-6292488d034e" containerName="registry-server" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.237646 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="d542c9c0-9ad4-4c99-becc-6292488d034e" containerName="registry-server" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.237808 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="015d5ce1-8720-4d4e-a628-9c72d1610cb2" containerName="registry-server" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.237830 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="d542c9c0-9ad4-4c99-becc-6292488d034e" containerName="registry-server" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.238780 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vck47" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.241949 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jv9f9" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.265835 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-bhlt2"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.267209 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-bhlt2" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.269684 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-7skxq" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.271965 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-vck47"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.277792 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzzbn\" (UniqueName: \"kubernetes.io/projected/a1bf70dd-f5d1-45a9-94a9-86fffb0758b2-kube-api-access-gzzbn\") pod \"barbican-operator-controller-manager-7d9dfd778-vck47\" (UID: \"a1bf70dd-f5d1-45a9-94a9-86fffb0758b2\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vck47" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.277908 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bl8g\" (UniqueName: \"kubernetes.io/projected/95cd9233-3d9c-45e1-ade0-6753a952b721-kube-api-access-6bl8g\") pod \"cinder-operator-controller-manager-55f4dbb9b7-bhlt2\" (UID: \"95cd9233-3d9c-45e1-ade0-6753a952b721\") " pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-bhlt2" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.300041 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-bhlt2"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.312471 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-kl65q"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.313936 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kl65q" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.323954 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-65z6q" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.385203 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b579t\" (UniqueName: \"kubernetes.io/projected/d20c6701-017d-4f33-91f0-10199890032f-kube-api-access-b579t\") pod \"designate-operator-controller-manager-78b4bc895b-kl65q\" (UID: \"d20c6701-017d-4f33-91f0-10199890032f\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kl65q" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.385329 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzzbn\" (UniqueName: \"kubernetes.io/projected/a1bf70dd-f5d1-45a9-94a9-86fffb0758b2-kube-api-access-gzzbn\") pod \"barbican-operator-controller-manager-7d9dfd778-vck47\" (UID: \"a1bf70dd-f5d1-45a9-94a9-86fffb0758b2\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vck47" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.385369 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bl8g\" (UniqueName: \"kubernetes.io/projected/95cd9233-3d9c-45e1-ade0-6753a952b721-kube-api-access-6bl8g\") pod \"cinder-operator-controller-manager-55f4dbb9b7-bhlt2\" (UID: \"95cd9233-3d9c-45e1-ade0-6753a952b721\") " pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-bhlt2" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.406429 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-kl65q"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.409633 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-q854l"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.422639 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q854l" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.426415 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jhwcz"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.428079 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jhwcz" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.432849 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-twcmd" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.433208 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-j54pv" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.451120 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-b882s"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.452840 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-b882s" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.459952 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-q854l"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.478939 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-556kt" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.491031 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bl8g\" (UniqueName: \"kubernetes.io/projected/95cd9233-3d9c-45e1-ade0-6753a952b721-kube-api-access-6bl8g\") pod \"cinder-operator-controller-manager-55f4dbb9b7-bhlt2\" (UID: \"95cd9233-3d9c-45e1-ade0-6753a952b721\") " pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-bhlt2" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.493690 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-b882s"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.498268 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzzbn\" (UniqueName: \"kubernetes.io/projected/a1bf70dd-f5d1-45a9-94a9-86fffb0758b2-kube-api-access-gzzbn\") pod \"barbican-operator-controller-manager-7d9dfd778-vck47\" (UID: \"a1bf70dd-f5d1-45a9-94a9-86fffb0758b2\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vck47" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.502739 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87gpc\" (UniqueName: \"kubernetes.io/projected/15504c82-ed79-4ab3-a157-7493e0b13058-kube-api-access-87gpc\") pod \"glance-operator-controller-manager-77987cd8cd-q854l\" (UID: \"15504c82-ed79-4ab3-a157-7493e0b13058\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q854l" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.502961 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b579t\" (UniqueName: \"kubernetes.io/projected/d20c6701-017d-4f33-91f0-10199890032f-kube-api-access-b579t\") pod \"designate-operator-controller-manager-78b4bc895b-kl65q\" (UID: \"d20c6701-017d-4f33-91f0-10199890032f\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kl65q" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.503119 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhcj5\" (UniqueName: \"kubernetes.io/projected/137e6ec9-76ad-4b65-a788-a8a38f84343f-kube-api-access-lhcj5\") pod \"heat-operator-controller-manager-5f64f6f8bb-jhwcz\" (UID: \"137e6ec9-76ad-4b65-a788-a8a38f84343f\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jhwcz" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.503267 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqgtz\" (UniqueName: \"kubernetes.io/projected/95a50933-f183-45d5-b8e2-aac85155551e-kube-api-access-sqgtz\") pod \"horizon-operator-controller-manager-68c6d99b8f-b882s\" (UID: \"95a50933-f183-45d5-b8e2-aac85155551e\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-b882s" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 
14:01:05.558528 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vck47" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.596045 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-bhlt2" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.612047 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhcj5\" (UniqueName: \"kubernetes.io/projected/137e6ec9-76ad-4b65-a788-a8a38f84343f-kube-api-access-lhcj5\") pod \"heat-operator-controller-manager-5f64f6f8bb-jhwcz\" (UID: \"137e6ec9-76ad-4b65-a788-a8a38f84343f\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jhwcz" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.612199 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqgtz\" (UniqueName: \"kubernetes.io/projected/95a50933-f183-45d5-b8e2-aac85155551e-kube-api-access-sqgtz\") pod \"horizon-operator-controller-manager-68c6d99b8f-b882s\" (UID: \"95a50933-f183-45d5-b8e2-aac85155551e\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-b882s" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.612243 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87gpc\" (UniqueName: \"kubernetes.io/projected/15504c82-ed79-4ab3-a157-7493e0b13058-kube-api-access-87gpc\") pod \"glance-operator-controller-manager-77987cd8cd-q854l\" (UID: \"15504c82-ed79-4ab3-a157-7493e0b13058\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q854l" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.615933 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jhwcz"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.616009 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-pr84z"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.617025 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b579t\" (UniqueName: \"kubernetes.io/projected/d20c6701-017d-4f33-91f0-10199890032f-kube-api-access-b579t\") pod \"designate-operator-controller-manager-78b4bc895b-kl65q\" (UID: \"d20c6701-017d-4f33-91f0-10199890032f\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kl65q" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.636368 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.653851 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.654138 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qbgv9" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.656431 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kl65q" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.694077 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhcj5\" (UniqueName: \"kubernetes.io/projected/137e6ec9-76ad-4b65-a788-a8a38f84343f-kube-api-access-lhcj5\") pod \"heat-operator-controller-manager-5f64f6f8bb-jhwcz\" (UID: \"137e6ec9-76ad-4b65-a788-a8a38f84343f\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jhwcz" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.702855 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87gpc\" (UniqueName: \"kubernetes.io/projected/15504c82-ed79-4ab3-a157-7493e0b13058-kube-api-access-87gpc\") pod \"glance-operator-controller-manager-77987cd8cd-q854l\" (UID: \"15504c82-ed79-4ab3-a157-7493e0b13058\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q854l" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.703984 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqgtz\" (UniqueName: \"kubernetes.io/projected/95a50933-f183-45d5-b8e2-aac85155551e-kube-api-access-sqgtz\") pod \"horizon-operator-controller-manager-68c6d99b8f-b882s\" (UID: \"95a50933-f183-45d5-b8e2-aac85155551e\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-b882s" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.715409 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wnf4\" (UniqueName: \"kubernetes.io/projected/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-kube-api-access-5wnf4\") pod \"infra-operator-controller-manager-57548d458d-pr84z\" (UID: \"73f97b4a-0c9b-4422-a7fd-e2aab20f9825\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.715450 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert\") pod \"infra-operator-controller-manager-57548d458d-pr84z\" (UID: \"73f97b4a-0c9b-4422-a7fd-e2aab20f9825\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.748796 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-qtsvv"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.750279 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-qtsvv" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.754217 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-pr84z"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.757605 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-98bdm" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.817447 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-qtsvv"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.818548 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phd45\" (UniqueName: \"kubernetes.io/projected/a686420b-bad9-418e-b729-96680afd0f07-kube-api-access-phd45\") pod \"ironic-operator-controller-manager-6c548fd776-qtsvv\" (UID: \"a686420b-bad9-418e-b729-96680afd0f07\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-qtsvv" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.818667 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wnf4\" (UniqueName: \"kubernetes.io/projected/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-kube-api-access-5wnf4\") pod \"infra-operator-controller-manager-57548d458d-pr84z\" (UID: \"73f97b4a-0c9b-4422-a7fd-e2aab20f9825\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.818694 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert\") pod \"infra-operator-controller-manager-57548d458d-pr84z\" (UID: \"73f97b4a-0c9b-4422-a7fd-e2aab20f9825\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" Dec 02 14:01:05 crc kubenswrapper[4625]: E1202 14:01:05.818866 4625 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 14:01:05 crc kubenswrapper[4625]: E1202 14:01:05.818939 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert podName:73f97b4a-0c9b-4422-a7fd-e2aab20f9825 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:06.318914109 +0000 UTC m=+1022.281091184 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert") pod "infra-operator-controller-manager-57548d458d-pr84z" (UID: "73f97b4a-0c9b-4422-a7fd-e2aab20f9825") : secret "infra-operator-webhook-server-cert" not found Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.836774 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-k4nlb"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.838167 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-k4nlb" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.840587 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q854l" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.841597 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-bv66x" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.859799 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wnf4\" (UniqueName: \"kubernetes.io/projected/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-kube-api-access-5wnf4\") pod \"infra-operator-controller-manager-57548d458d-pr84z\" (UID: \"73f97b4a-0c9b-4422-a7fd-e2aab20f9825\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.886160 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-k4nlb"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.961962 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phd45\" (UniqueName: \"kubernetes.io/projected/a686420b-bad9-418e-b729-96680afd0f07-kube-api-access-phd45\") pod \"ironic-operator-controller-manager-6c548fd776-qtsvv\" (UID: \"a686420b-bad9-418e-b729-96680afd0f07\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-qtsvv" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.962091 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwnlc\" (UniqueName: \"kubernetes.io/projected/0fbe84bd-4dc3-4f2c-b890-16a1b15f4d0e-kube-api-access-nwnlc\") pod \"keystone-operator-controller-manager-7765d96ddf-k4nlb\" (UID: \"0fbe84bd-4dc3-4f2c-b890-16a1b15f4d0e\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-k4nlb" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.967548 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jhwcz" Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.981213 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-lwmht"] Dec 02 14:01:05 crc kubenswrapper[4625]: I1202 14:01:05.987025 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-b882s" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.042166 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phd45\" (UniqueName: \"kubernetes.io/projected/a686420b-bad9-418e-b729-96680afd0f07-kube-api-access-phd45\") pod \"ironic-operator-controller-manager-6c548fd776-qtsvv\" (UID: \"a686420b-bad9-418e-b729-96680afd0f07\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-qtsvv" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.049758 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-lwmht" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.054781 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-97bh6" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.096611 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-qtsvv" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.097954 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwnlc\" (UniqueName: \"kubernetes.io/projected/0fbe84bd-4dc3-4f2c-b890-16a1b15f4d0e-kube-api-access-nwnlc\") pod \"keystone-operator-controller-manager-7765d96ddf-k4nlb\" (UID: \"0fbe84bd-4dc3-4f2c-b890-16a1b15f4d0e\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-k4nlb" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.141047 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwnlc\" (UniqueName: \"kubernetes.io/projected/0fbe84bd-4dc3-4f2c-b890-16a1b15f4d0e-kube-api-access-nwnlc\") pod \"keystone-operator-controller-manager-7765d96ddf-k4nlb\" (UID: \"0fbe84bd-4dc3-4f2c-b890-16a1b15f4d0e\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-k4nlb" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.159348 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-gr5bt"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.161656 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-gr5bt" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.173266 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-4bb4c" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.203622 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87dhd\" (UniqueName: \"kubernetes.io/projected/79d42122-959c-41e1-9c56-58788fd56100-kube-api-access-87dhd\") pod \"mariadb-operator-controller-manager-56bbcc9d85-gr5bt\" (UID: \"79d42122-959c-41e1-9c56-58788fd56100\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-gr5bt" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.203701 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqn7r\" (UniqueName: \"kubernetes.io/projected/5e58537e-7499-41c1-b154-ff06bd4dd58a-kube-api-access-dqn7r\") pod \"manila-operator-controller-manager-7c79b5df47-lwmht\" (UID: \"5e58537e-7499-41c1-b154-ff06bd4dd58a\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-lwmht" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.212929 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-lwmht"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.281414 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-k4nlb" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.306920 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87dhd\" (UniqueName: \"kubernetes.io/projected/79d42122-959c-41e1-9c56-58788fd56100-kube-api-access-87dhd\") pod \"mariadb-operator-controller-manager-56bbcc9d85-gr5bt\" (UID: \"79d42122-959c-41e1-9c56-58788fd56100\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-gr5bt" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.306998 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqn7r\" (UniqueName: \"kubernetes.io/projected/5e58537e-7499-41c1-b154-ff06bd4dd58a-kube-api-access-dqn7r\") pod \"manila-operator-controller-manager-7c79b5df47-lwmht\" (UID: \"5e58537e-7499-41c1-b154-ff06bd4dd58a\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-lwmht" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.331633 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-gr5bt"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.332961 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87dhd\" (UniqueName: \"kubernetes.io/projected/79d42122-959c-41e1-9c56-58788fd56100-kube-api-access-87dhd\") pod \"mariadb-operator-controller-manager-56bbcc9d85-gr5bt\" (UID: \"79d42122-959c-41e1-9c56-58788fd56100\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-gr5bt" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.342196 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqn7r\" (UniqueName: \"kubernetes.io/projected/5e58537e-7499-41c1-b154-ff06bd4dd58a-kube-api-access-dqn7r\") pod \"manila-operator-controller-manager-7c79b5df47-lwmht\" (UID: \"5e58537e-7499-41c1-b154-ff06bd4dd58a\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-lwmht" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.390749 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jwks4"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.412147 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jwks4" Dec 02 14:01:06 crc kubenswrapper[4625]: E1202 14:01:06.409576 4625 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 14:01:06 crc kubenswrapper[4625]: E1202 14:01:06.413715 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert podName:73f97b4a-0c9b-4422-a7fd-e2aab20f9825 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:07.413692431 +0000 UTC m=+1023.375869496 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert") pod "infra-operator-controller-manager-57548d458d-pr84z" (UID: "73f97b4a-0c9b-4422-a7fd-e2aab20f9825") : secret "infra-operator-webhook-server-cert" not found Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.409473 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert\") pod \"infra-operator-controller-manager-57548d458d-pr84z\" (UID: \"73f97b4a-0c9b-4422-a7fd-e2aab20f9825\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.414347 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-zk4xg"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.422136 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-mk7d7" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.422181 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-zk4xg" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.443105 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-mmg4r" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.458706 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-lwmht" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.468441 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jwks4"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.517508 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-lg24z"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.518739 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lg24z" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.519745 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-gr5bt" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.532821 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ktt5p" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.534508 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrrm7\" (UniqueName: \"kubernetes.io/projected/910705f2-ee02-421a-a0eb-eb594d119f9e-kube-api-access-xrrm7\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-jwks4\" (UID: \"910705f2-ee02-421a-a0eb-eb594d119f9e\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jwks4" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.534580 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwrk\" (UniqueName: \"kubernetes.io/projected/cc5f44ae-eba1-40ca-8391-49985c6211bd-kube-api-access-gfwrk\") pod \"nova-operator-controller-manager-697bc559fc-zk4xg\" (UID: \"cc5f44ae-eba1-40ca-8391-49985c6211bd\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-zk4xg" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.611329 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-lg24z"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.622730 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-zk4xg"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.642001 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q79r5\" (UniqueName: \"kubernetes.io/projected/d04e4d3e-b826-40ad-9955-7c7ba1379920-kube-api-access-q79r5\") pod \"octavia-operator-controller-manager-998648c74-lg24z\" (UID: \"d04e4d3e-b826-40ad-9955-7c7ba1379920\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-lg24z" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.642192 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrrm7\" (UniqueName: \"kubernetes.io/projected/910705f2-ee02-421a-a0eb-eb594d119f9e-kube-api-access-xrrm7\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-jwks4\" (UID: \"910705f2-ee02-421a-a0eb-eb594d119f9e\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jwks4" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.650161 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfwrk\" (UniqueName: \"kubernetes.io/projected/cc5f44ae-eba1-40ca-8391-49985c6211bd-kube-api-access-gfwrk\") pod \"nova-operator-controller-manager-697bc559fc-zk4xg\" (UID: \"cc5f44ae-eba1-40ca-8391-49985c6211bd\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-zk4xg" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.666101 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-jktk5"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.667559 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jktk5" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.674657 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-nx6dh" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.716544 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.726473 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrrm7\" (UniqueName: \"kubernetes.io/projected/910705f2-ee02-421a-a0eb-eb594d119f9e-kube-api-access-xrrm7\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-jwks4\" (UID: \"910705f2-ee02-421a-a0eb-eb594d119f9e\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jwks4" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.733469 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfwrk\" (UniqueName: \"kubernetes.io/projected/cc5f44ae-eba1-40ca-8391-49985c6211bd-kube-api-access-gfwrk\") pod \"nova-operator-controller-manager-697bc559fc-zk4xg\" (UID: \"cc5f44ae-eba1-40ca-8391-49985c6211bd\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-zk4xg" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.749092 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.757369 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-5sdd8" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.761984 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.767160 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-qgjn7"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.768591 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-qgjn7" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.774883 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-b7ndg" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.786380 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66xsg\" (UniqueName: \"kubernetes.io/projected/a9612490-cbef-4040-a5f5-26737160de83-kube-api-access-66xsg\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck\" (UID: \"a9612490-cbef-4040-a5f5-26737160de83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.794683 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jwks4" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.796168 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-jktk5"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.799839 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q79r5\" (UniqueName: \"kubernetes.io/projected/d04e4d3e-b826-40ad-9955-7c7ba1379920-kube-api-access-q79r5\") pod \"octavia-operator-controller-manager-998648c74-lg24z\" (UID: \"d04e4d3e-b826-40ad-9955-7c7ba1379920\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-lg24z" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.800425 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck\" (UID: \"a9612490-cbef-4040-a5f5-26737160de83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.800692 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wvfm\" (UniqueName: \"kubernetes.io/projected/0f5a3014-4394-4a6f-972e-52f2ef19328f-kube-api-access-7wvfm\") pod \"ovn-operator-controller-manager-b6456fdb6-jktk5\" (UID: \"0f5a3014-4394-4a6f-972e-52f2ef19328f\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jktk5" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.812191 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-qgjn7"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.829412 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q79r5\" (UniqueName: \"kubernetes.io/projected/d04e4d3e-b826-40ad-9955-7c7ba1379920-kube-api-access-q79r5\") pod \"octavia-operator-controller-manager-998648c74-lg24z\" (UID: \"d04e4d3e-b826-40ad-9955-7c7ba1379920\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-lg24z" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.836513 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck"] Dec 02 14:01:06 crc kubenswrapper[4625]: W1202 14:01:06.850219 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95cd9233_3d9c_45e1_ade0_6753a952b721.slice/crio-2db20fdf5a0e1da35e95795ea5408643d0dc75b3a1f63a4398b976c3cfec3fda WatchSource:0}: Error finding container 2db20fdf5a0e1da35e95795ea5408643d0dc75b3a1f63a4398b976c3cfec3fda: Status 404 returned error can't find the container with id 2db20fdf5a0e1da35e95795ea5408643d0dc75b3a1f63a4398b976c3cfec3fda Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.883196 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-ls4vx"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.884394 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-ls4vx"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.884415 4625 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mmjnh"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.885831 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-ls4vx" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.886083 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-zk4xg" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.892411 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-wqq4b"] Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.892520 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mmjnh" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.894662 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wqq4b" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.905644 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-kbt6z" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.905935 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-8v962" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.907232 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-frts9" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.910933 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66xsg\" (UniqueName: \"kubernetes.io/projected/a9612490-cbef-4040-a5f5-26737160de83-kube-api-access-66xsg\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck\" (UID: \"a9612490-cbef-4040-a5f5-26737160de83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.911009 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qbkf\" (UniqueName: \"kubernetes.io/projected/4dff1b74-d58f-40b9-a3a6-c1ebdd498690-kube-api-access-8qbkf\") pod \"placement-operator-controller-manager-78f8948974-qgjn7\" (UID: \"4dff1b74-d58f-40b9-a3a6-c1ebdd498690\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-qgjn7" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.911185 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck\" (UID: \"a9612490-cbef-4040-a5f5-26737160de83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.911227 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wvfm\" (UniqueName: \"kubernetes.io/projected/0f5a3014-4394-4a6f-972e-52f2ef19328f-kube-api-access-7wvfm\") pod \"ovn-operator-controller-manager-b6456fdb6-jktk5\" (UID: 
\"0f5a3014-4394-4a6f-972e-52f2ef19328f\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jktk5" Dec 02 14:01:06 crc kubenswrapper[4625]: E1202 14:01:06.911886 4625 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:01:06 crc kubenswrapper[4625]: E1202 14:01:06.911935 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert podName:a9612490-cbef-4040-a5f5-26737160de83 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:07.411918731 +0000 UTC m=+1023.374095806 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" (UID: "a9612490-cbef-4040-a5f5-26737160de83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.984645 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wvfm\" (UniqueName: \"kubernetes.io/projected/0f5a3014-4394-4a6f-972e-52f2ef19328f-kube-api-access-7wvfm\") pod \"ovn-operator-controller-manager-b6456fdb6-jktk5\" (UID: \"0f5a3014-4394-4a6f-972e-52f2ef19328f\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jktk5" Dec 02 14:01:06 crc kubenswrapper[4625]: I1202 14:01:06.990968 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66xsg\" (UniqueName: \"kubernetes.io/projected/a9612490-cbef-4040-a5f5-26737160de83-kube-api-access-66xsg\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck\" (UID: \"a9612490-cbef-4040-a5f5-26737160de83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.005766 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mmjnh"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.018056 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6cn8\" (UniqueName: \"kubernetes.io/projected/37a0be8e-736e-486e-a1af-abc65c34c25b-kube-api-access-x6cn8\") pod \"test-operator-controller-manager-5854674fcc-wqq4b\" (UID: \"37a0be8e-736e-486e-a1af-abc65c34c25b\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-wqq4b" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.018120 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qbkf\" (UniqueName: \"kubernetes.io/projected/4dff1b74-d58f-40b9-a3a6-c1ebdd498690-kube-api-access-8qbkf\") pod \"placement-operator-controller-manager-78f8948974-qgjn7\" (UID: \"4dff1b74-d58f-40b9-a3a6-c1ebdd498690\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-qgjn7" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.018198 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25t8v\" (UniqueName: \"kubernetes.io/projected/0daed4ec-6cef-4f70-bdf2-27c278868917-kube-api-access-25t8v\") pod \"swift-operator-controller-manager-5f8c65bbfc-ls4vx\" (UID: \"0daed4ec-6cef-4f70-bdf2-27c278868917\") " 
pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-ls4vx" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.018299 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5fvg\" (UniqueName: \"kubernetes.io/projected/6075f378-b13f-422a-a3c0-3301d78d3fa9-kube-api-access-b5fvg\") pod \"telemetry-operator-controller-manager-76cc84c6bb-mmjnh\" (UID: \"6075f378-b13f-422a-a3c0-3301d78d3fa9\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mmjnh" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.022255 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lg24z" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.041098 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jktk5" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.054898 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qbkf\" (UniqueName: \"kubernetes.io/projected/4dff1b74-d58f-40b9-a3a6-c1ebdd498690-kube-api-access-8qbkf\") pod \"placement-operator-controller-manager-78f8948974-qgjn7\" (UID: \"4dff1b74-d58f-40b9-a3a6-c1ebdd498690\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-qgjn7" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.077664 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-wqq4b"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.104653 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-qgjn7" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.117821 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-wpdzp"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.128857 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25t8v\" (UniqueName: \"kubernetes.io/projected/0daed4ec-6cef-4f70-bdf2-27c278868917-kube-api-access-25t8v\") pod \"swift-operator-controller-manager-5f8c65bbfc-ls4vx\" (UID: \"0daed4ec-6cef-4f70-bdf2-27c278868917\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-ls4vx" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.132400 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5fvg\" (UniqueName: \"kubernetes.io/projected/6075f378-b13f-422a-a3c0-3301d78d3fa9-kube-api-access-b5fvg\") pod \"telemetry-operator-controller-manager-76cc84c6bb-mmjnh\" (UID: \"6075f378-b13f-422a-a3c0-3301d78d3fa9\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mmjnh" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.133155 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6cn8\" (UniqueName: \"kubernetes.io/projected/37a0be8e-736e-486e-a1af-abc65c34c25b-kube-api-access-x6cn8\") pod \"test-operator-controller-manager-5854674fcc-wqq4b\" (UID: \"37a0be8e-736e-486e-a1af-abc65c34c25b\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-wqq4b" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.140653 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-wpdzp" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.174975 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6cn8\" (UniqueName: \"kubernetes.io/projected/37a0be8e-736e-486e-a1af-abc65c34c25b-kube-api-access-x6cn8\") pod \"test-operator-controller-manager-5854674fcc-wqq4b\" (UID: \"37a0be8e-736e-486e-a1af-abc65c34c25b\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-wqq4b" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.179699 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-2qc8p" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.179951 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-wpdzp"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.187392 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25t8v\" (UniqueName: \"kubernetes.io/projected/0daed4ec-6cef-4f70-bdf2-27c278868917-kube-api-access-25t8v\") pod \"swift-operator-controller-manager-5f8c65bbfc-ls4vx\" (UID: \"0daed4ec-6cef-4f70-bdf2-27c278868917\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-ls4vx" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.188432 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5fvg\" (UniqueName: \"kubernetes.io/projected/6075f378-b13f-422a-a3c0-3301d78d3fa9-kube-api-access-b5fvg\") pod \"telemetry-operator-controller-manager-76cc84c6bb-mmjnh\" (UID: \"6075f378-b13f-422a-a3c0-3301d78d3fa9\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mmjnh" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.188498 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.222280 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.228208 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.228287 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-5njt2" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.230391 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.249395 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-ls4vx" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.256195 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.324775 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mmjnh" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.328096 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wqq4b" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.340172 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csb9j\" (UniqueName: \"kubernetes.io/projected/db262c5c-48c7-4749-990c-77993791ba47-kube-api-access-csb9j\") pod \"watcher-operator-controller-manager-769dc69bc-wpdzp\" (UID: \"db262c5c-48c7-4749-990c-77993791ba47\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-wpdzp" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.340248 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df5nc\" (UniqueName: \"kubernetes.io/projected/e3cfbc8e-665f-4007-a38d-714f53c48923-kube-api-access-df5nc\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.340269 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.340363 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.377073 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lmwf"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.378027 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lmwf" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.382048 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jdjc7" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.388723 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lmwf"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.416356 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-bhlt2"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.442259 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df5nc\" (UniqueName: \"kubernetes.io/projected/e3cfbc8e-665f-4007-a38d-714f53c48923-kube-api-access-df5nc\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.442304 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.442365 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert\") pod \"infra-operator-controller-manager-57548d458d-pr84z\" (UID: \"73f97b4a-0c9b-4422-a7fd-e2aab20f9825\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.442395 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck\" (UID: \"a9612490-cbef-4040-a5f5-26737160de83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.442442 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.442521 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csb9j\" (UniqueName: \"kubernetes.io/projected/db262c5c-48c7-4749-990c-77993791ba47-kube-api-access-csb9j\") pod \"watcher-operator-controller-manager-769dc69bc-wpdzp\" (UID: \"db262c5c-48c7-4749-990c-77993791ba47\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-wpdzp" Dec 02 14:01:07 crc kubenswrapper[4625]: E1202 14:01:07.442935 4625 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: 
secret "infra-operator-webhook-server-cert" not found Dec 02 14:01:07 crc kubenswrapper[4625]: E1202 14:01:07.442973 4625 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:01:07 crc kubenswrapper[4625]: E1202 14:01:07.443009 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert podName:73f97b4a-0c9b-4422-a7fd-e2aab20f9825 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:09.442987972 +0000 UTC m=+1025.405165097 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert") pod "infra-operator-controller-manager-57548d458d-pr84z" (UID: "73f97b4a-0c9b-4422-a7fd-e2aab20f9825") : secret "infra-operator-webhook-server-cert" not found Dec 02 14:01:07 crc kubenswrapper[4625]: E1202 14:01:07.443033 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs podName:e3cfbc8e-665f-4007-a38d-714f53c48923 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:07.943023773 +0000 UTC m=+1023.905200948 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs") pod "openstack-operator-controller-manager-58cd586464-xpfd6" (UID: "e3cfbc8e-665f-4007-a38d-714f53c48923") : secret "webhook-server-cert" not found Dec 02 14:01:07 crc kubenswrapper[4625]: E1202 14:01:07.442935 4625 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:01:07 crc kubenswrapper[4625]: E1202 14:01:07.443041 4625 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:01:07 crc kubenswrapper[4625]: E1202 14:01:07.443067 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert podName:a9612490-cbef-4040-a5f5-26737160de83 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:08.443060284 +0000 UTC m=+1024.405237459 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" (UID: "a9612490-cbef-4040-a5f5-26737160de83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:01:07 crc kubenswrapper[4625]: E1202 14:01:07.443093 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs podName:e3cfbc8e-665f-4007-a38d-714f53c48923 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:07.943076844 +0000 UTC m=+1023.905253919 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs") pod "openstack-operator-controller-manager-58cd586464-xpfd6" (UID: "e3cfbc8e-665f-4007-a38d-714f53c48923") : secret "metrics-server-cert" not found Dec 02 14:01:07 crc kubenswrapper[4625]: W1202 14:01:07.515269 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95a50933_f183_45d5_b8e2_aac85155551e.slice/crio-0914ca21863b06c2852cbfd2c38177b2f4726684ecc78b246af96d1680c5f88b WatchSource:0}: Error finding container 0914ca21863b06c2852cbfd2c38177b2f4726684ecc78b246af96d1680c5f88b: Status 404 returned error can't find the container with id 0914ca21863b06c2852cbfd2c38177b2f4726684ecc78b246af96d1680c5f88b Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.515635 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csb9j\" (UniqueName: \"kubernetes.io/projected/db262c5c-48c7-4749-990c-77993791ba47-kube-api-access-csb9j\") pod \"watcher-operator-controller-manager-769dc69bc-wpdzp\" (UID: \"db262c5c-48c7-4749-990c-77993791ba47\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-wpdzp" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.515793 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df5nc\" (UniqueName: \"kubernetes.io/projected/e3cfbc8e-665f-4007-a38d-714f53c48923-kube-api-access-df5nc\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.530223 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-vck47"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.554158 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-wpdzp" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.587475 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdmbc\" (UniqueName: \"kubernetes.io/projected/38eaf493-09d1-441e-81a9-777174f24006-kube-api-access-hdmbc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7lmwf\" (UID: \"38eaf493-09d1-441e-81a9-777174f24006\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lmwf" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.673533 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-q854l"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.696850 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdmbc\" (UniqueName: \"kubernetes.io/projected/38eaf493-09d1-441e-81a9-777174f24006-kube-api-access-hdmbc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7lmwf\" (UID: \"38eaf493-09d1-441e-81a9-777174f24006\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lmwf" Dec 02 14:01:07 crc kubenswrapper[4625]: W1202 14:01:07.714928 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fbe84bd_4dc3_4f2c_b890_16a1b15f4d0e.slice/crio-b799e990c5bfe555f58a1353066165e34c1207c493434136f5636bd4f2593516 WatchSource:0}: Error finding container b799e990c5bfe555f58a1353066165e34c1207c493434136f5636bd4f2593516: Status 404 returned error can't find the container with id b799e990c5bfe555f58a1353066165e34c1207c493434136f5636bd4f2593516 Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.752355 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdmbc\" (UniqueName: \"kubernetes.io/projected/38eaf493-09d1-441e-81a9-777174f24006-kube-api-access-hdmbc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7lmwf\" (UID: \"38eaf493-09d1-441e-81a9-777174f24006\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lmwf" Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.800000 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jhwcz"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.867960 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-qtsvv"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.884392 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-b882s"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.939350 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-kl65q"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.951631 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kl65q" event={"ID":"d20c6701-017d-4f33-91f0-10199890032f","Type":"ContainerStarted","Data":"e6982d546d668d63c99ccadbf53c5e72a9edd6115b0883826687a681c8b917e0"} Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.953651 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q854l" event={"ID":"15504c82-ed79-4ab3-a157-7493e0b13058","Type":"ContainerStarted","Data":"413356ed450f5a48741c731292a88d062ac6770af68366385d9908e348f1b60b"} Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.976233 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-k4nlb"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.982879 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-gr5bt"] Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.990111 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-b882s" event={"ID":"95a50933-f183-45d5-b8e2-aac85155551e","Type":"ContainerStarted","Data":"0914ca21863b06c2852cbfd2c38177b2f4726684ecc78b246af96d1680c5f88b"} Dec 02 14:01:07 crc kubenswrapper[4625]: I1202 14:01:07.993415 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vck47" event={"ID":"a1bf70dd-f5d1-45a9-94a9-86fffb0758b2","Type":"ContainerStarted","Data":"c58fa887ab7f3b4b3716964cf80c800440a514ad63de84301a453838a1bf0dcc"} Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:07.998377 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jwks4"] Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:08.001451 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-lwmht"] Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:08.003922 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:08.004040 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:08 crc kubenswrapper[4625]: E1202 14:01:08.004264 4625 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:01:08 crc kubenswrapper[4625]: E1202 14:01:08.004350 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs podName:e3cfbc8e-665f-4007-a38d-714f53c48923 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:09.004333175 +0000 UTC m=+1024.966510250 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs") pod "openstack-operator-controller-manager-58cd586464-xpfd6" (UID: "e3cfbc8e-665f-4007-a38d-714f53c48923") : secret "webhook-server-cert" not found Dec 02 14:01:08 crc kubenswrapper[4625]: E1202 14:01:08.004598 4625 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:01:08 crc kubenswrapper[4625]: E1202 14:01:08.004669 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs podName:e3cfbc8e-665f-4007-a38d-714f53c48923 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:09.004646825 +0000 UTC m=+1024.966823970 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs") pod "openstack-operator-controller-manager-58cd586464-xpfd6" (UID: "e3cfbc8e-665f-4007-a38d-714f53c48923") : secret "metrics-server-cert" not found Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:08.006655 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lmwf" Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:08.007038 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-qtsvv" event={"ID":"a686420b-bad9-418e-b729-96680afd0f07","Type":"ContainerStarted","Data":"0e393a11982aea225a37c83e7154221fc4de98a0afc8a999adc008141becf214"} Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:08.029455 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-bhlt2" event={"ID":"95cd9233-3d9c-45e1-ade0-6753a952b721","Type":"ContainerStarted","Data":"2db20fdf5a0e1da35e95795ea5408643d0dc75b3a1f63a4398b976c3cfec3fda"} Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:08.041249 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jhwcz" event={"ID":"137e6ec9-76ad-4b65-a788-a8a38f84343f","Type":"ContainerStarted","Data":"92a8a77ed6fe7308663ff156d88283d9dc42d4b66455d08087d00313cc09c44e"} Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:08.405442 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-zk4xg"] Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:08.476729 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-wqq4b"] Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:08.519087 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck\" (UID: \"a9612490-cbef-4040-a5f5-26737160de83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" Dec 02 14:01:08 crc kubenswrapper[4625]: E1202 14:01:08.519532 4625 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:01:08 crc kubenswrapper[4625]: E1202 14:01:08.519599 4625 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert podName:a9612490-cbef-4040-a5f5-26737160de83 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:10.519579942 +0000 UTC m=+1026.481757017 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" (UID: "a9612490-cbef-4040-a5f5-26737160de83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:08.560178 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-ls4vx"] Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:08.614391 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-jktk5"] Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:08.637455 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-lg24z"] Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:08.738760 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-wpdzp"] Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:08.825520 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-qgjn7"] Dec 02 14:01:08 crc kubenswrapper[4625]: I1202 14:01:08.841659 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mmjnh"] Dec 02 14:01:09 crc kubenswrapper[4625]: I1202 14:01:09.054395 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:09 crc kubenswrapper[4625]: I1202 14:01:09.054822 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:09 crc kubenswrapper[4625]: E1202 14:01:09.054688 4625 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:01:09 crc kubenswrapper[4625]: E1202 14:01:09.055060 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs podName:e3cfbc8e-665f-4007-a38d-714f53c48923 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:11.055041505 +0000 UTC m=+1027.017218580 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs") pod "openstack-operator-controller-manager-58cd586464-xpfd6" (UID: "e3cfbc8e-665f-4007-a38d-714f53c48923") : secret "metrics-server-cert" not found Dec 02 14:01:09 crc kubenswrapper[4625]: E1202 14:01:09.054998 4625 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:01:09 crc kubenswrapper[4625]: E1202 14:01:09.055461 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs podName:e3cfbc8e-665f-4007-a38d-714f53c48923 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:11.055452956 +0000 UTC m=+1027.017630031 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs") pod "openstack-operator-controller-manager-58cd586464-xpfd6" (UID: "e3cfbc8e-665f-4007-a38d-714f53c48923") : secret "webhook-server-cert" not found Dec 02 14:01:09 crc kubenswrapper[4625]: I1202 14:01:09.260336 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-ls4vx" event={"ID":"0daed4ec-6cef-4f70-bdf2-27c278868917","Type":"ContainerStarted","Data":"847b572cf52d05a4601e0fd93c6747c36435e9b6c5f15e00f3cd699de2e25bf9"} Dec 02 14:01:09 crc kubenswrapper[4625]: I1202 14:01:09.260419 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lmwf"] Dec 02 14:01:09 crc kubenswrapper[4625]: I1202 14:01:09.260444 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-gr5bt" event={"ID":"79d42122-959c-41e1-9c56-58788fd56100","Type":"ContainerStarted","Data":"1af63e48c36955067578cc75eb2f281ac12135e6c90d278ceeb479689d34cb2f"} Dec 02 14:01:09 crc kubenswrapper[4625]: I1202 14:01:09.260458 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-zk4xg" event={"ID":"cc5f44ae-eba1-40ca-8391-49985c6211bd","Type":"ContainerStarted","Data":"e734679c326c0357fc571f886cc86a334e0a0f00ad5de0dd03b906191ebac37f"} Dec 02 14:01:09 crc kubenswrapper[4625]: I1202 14:01:09.260472 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-lwmht" event={"ID":"5e58537e-7499-41c1-b154-ff06bd4dd58a","Type":"ContainerStarted","Data":"9909c389be84f9839b712a399fec6918e97be991ac36d32cc354d3d9c16f4357"} Dec 02 14:01:09 crc kubenswrapper[4625]: I1202 14:01:09.260485 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-wpdzp" event={"ID":"db262c5c-48c7-4749-990c-77993791ba47","Type":"ContainerStarted","Data":"9178c694be9e739f86e512a8ab8f527648657ecae5f927feb9189b98436b5b7d"} Dec 02 14:01:09 crc kubenswrapper[4625]: I1202 14:01:09.260497 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-k4nlb" event={"ID":"0fbe84bd-4dc3-4f2c-b890-16a1b15f4d0e","Type":"ContainerStarted","Data":"b799e990c5bfe555f58a1353066165e34c1207c493434136f5636bd4f2593516"} Dec 02 14:01:09 crc kubenswrapper[4625]: I1202 14:01:09.260509 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jwks4" event={"ID":"910705f2-ee02-421a-a0eb-eb594d119f9e","Type":"ContainerStarted","Data":"9d420389335ed67d02d74d2c744dcec909602e2f610e99755421c0cb22fb49d7"} Dec 02 14:01:09 crc kubenswrapper[4625]: I1202 14:01:09.260543 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lg24z" event={"ID":"d04e4d3e-b826-40ad-9955-7c7ba1379920","Type":"ContainerStarted","Data":"bbd2d3ee872478503ff8a64bf43c2c06d89a0f59dd28f3def648359b2104de96"} Dec 02 14:01:09 crc kubenswrapper[4625]: I1202 14:01:09.260556 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wqq4b" event={"ID":"37a0be8e-736e-486e-a1af-abc65c34c25b","Type":"ContainerStarted","Data":"2b53e7738a1c8103be7a07babd43aa6ff72dd2f5eaf1665c685e126a6967e0c2"} Dec 02 14:01:09 crc kubenswrapper[4625]: I1202 14:01:09.260570 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jktk5" event={"ID":"0f5a3014-4394-4a6f-972e-52f2ef19328f","Type":"ContainerStarted","Data":"efcd4cddb29614d21ec5f64cde04983ae5d0a5af5eeae668729fb249180dec34"} Dec 02 14:01:09 crc kubenswrapper[4625]: W1202 14:01:09.266166 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dff1b74_d58f_40b9_a3a6_c1ebdd498690.slice/crio-9e06f0411aef785a4f717cfd592d9811b507d1b63027fa92b77e70310b16a239 WatchSource:0}: Error finding container 9e06f0411aef785a4f717cfd592d9811b507d1b63027fa92b77e70310b16a239: Status 404 returned error can't find the container with id 9e06f0411aef785a4f717cfd592d9811b507d1b63027fa92b77e70310b16a239 Dec 02 14:01:09 crc kubenswrapper[4625]: W1202 14:01:09.268694 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6075f378_b13f_422a_a3c0_3301d78d3fa9.slice/crio-4685ac7dc9ea76093fc84edc468900c94cf21967c0c2a394e88a702a3ed0d5e0 WatchSource:0}: Error finding container 4685ac7dc9ea76093fc84edc468900c94cf21967c0c2a394e88a702a3ed0d5e0: Status 404 returned error can't find the container with id 4685ac7dc9ea76093fc84edc468900c94cf21967c0c2a394e88a702a3ed0d5e0 Dec 02 14:01:09 crc kubenswrapper[4625]: W1202 14:01:09.270062 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38eaf493_09d1_441e_81a9_777174f24006.slice/crio-e5bb327ffab2e6bf8e3d3444072a012befd81c31ac062e4a16cc53c2e3144556 WatchSource:0}: Error finding container e5bb327ffab2e6bf8e3d3444072a012befd81c31ac062e4a16cc53c2e3144556: Status 404 returned error can't find the container with id e5bb327ffab2e6bf8e3d3444072a012befd81c31ac062e4a16cc53c2e3144556 Dec 02 14:01:09 crc kubenswrapper[4625]: I1202 14:01:09.462443 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert\") pod \"infra-operator-controller-manager-57548d458d-pr84z\" (UID: \"73f97b4a-0c9b-4422-a7fd-e2aab20f9825\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" Dec 02 14:01:09 crc kubenswrapper[4625]: E1202 14:01:09.462687 4625 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 14:01:09 crc 
kubenswrapper[4625]: E1202 14:01:09.462740 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert podName:73f97b4a-0c9b-4422-a7fd-e2aab20f9825 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:13.462723616 +0000 UTC m=+1029.424900691 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert") pod "infra-operator-controller-manager-57548d458d-pr84z" (UID: "73f97b4a-0c9b-4422-a7fd-e2aab20f9825") : secret "infra-operator-webhook-server-cert" not found Dec 02 14:01:10 crc kubenswrapper[4625]: I1202 14:01:10.154602 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-qgjn7" event={"ID":"4dff1b74-d58f-40b9-a3a6-c1ebdd498690","Type":"ContainerStarted","Data":"9e06f0411aef785a4f717cfd592d9811b507d1b63027fa92b77e70310b16a239"} Dec 02 14:01:10 crc kubenswrapper[4625]: I1202 14:01:10.174580 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mmjnh" event={"ID":"6075f378-b13f-422a-a3c0-3301d78d3fa9","Type":"ContainerStarted","Data":"4685ac7dc9ea76093fc84edc468900c94cf21967c0c2a394e88a702a3ed0d5e0"} Dec 02 14:01:10 crc kubenswrapper[4625]: I1202 14:01:10.191930 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lmwf" event={"ID":"38eaf493-09d1-441e-81a9-777174f24006","Type":"ContainerStarted","Data":"e5bb327ffab2e6bf8e3d3444072a012befd81c31ac062e4a16cc53c2e3144556"} Dec 02 14:01:10 crc kubenswrapper[4625]: I1202 14:01:10.589654 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck\" (UID: \"a9612490-cbef-4040-a5f5-26737160de83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" Dec 02 14:01:10 crc kubenswrapper[4625]: E1202 14:01:10.589959 4625 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:01:10 crc kubenswrapper[4625]: E1202 14:01:10.590093 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert podName:a9612490-cbef-4040-a5f5-26737160de83 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:14.5900567 +0000 UTC m=+1030.552233775 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" (UID: "a9612490-cbef-4040-a5f5-26737160de83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:01:11 crc kubenswrapper[4625]: I1202 14:01:11.099110 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:11 crc kubenswrapper[4625]: I1202 14:01:11.099208 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:11 crc kubenswrapper[4625]: E1202 14:01:11.099457 4625 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:01:11 crc kubenswrapper[4625]: E1202 14:01:11.099541 4625 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:01:11 crc kubenswrapper[4625]: E1202 14:01:11.099567 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs podName:e3cfbc8e-665f-4007-a38d-714f53c48923 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:15.099543518 +0000 UTC m=+1031.061720653 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs") pod "openstack-operator-controller-manager-58cd586464-xpfd6" (UID: "e3cfbc8e-665f-4007-a38d-714f53c48923") : secret "webhook-server-cert" not found Dec 02 14:01:11 crc kubenswrapper[4625]: E1202 14:01:11.099682 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs podName:e3cfbc8e-665f-4007-a38d-714f53c48923 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:15.099654371 +0000 UTC m=+1031.061831446 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs") pod "openstack-operator-controller-manager-58cd586464-xpfd6" (UID: "e3cfbc8e-665f-4007-a38d-714f53c48923") : secret "metrics-server-cert" not found Dec 02 14:01:13 crc kubenswrapper[4625]: I1202 14:01:13.519890 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert\") pod \"infra-operator-controller-manager-57548d458d-pr84z\" (UID: \"73f97b4a-0c9b-4422-a7fd-e2aab20f9825\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" Dec 02 14:01:13 crc kubenswrapper[4625]: E1202 14:01:13.520061 4625 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 02 14:01:13 crc kubenswrapper[4625]: E1202 14:01:13.520119 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert podName:73f97b4a-0c9b-4422-a7fd-e2aab20f9825 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:21.520102215 +0000 UTC m=+1037.482279290 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert") pod "infra-operator-controller-manager-57548d458d-pr84z" (UID: "73f97b4a-0c9b-4422-a7fd-e2aab20f9825") : secret "infra-operator-webhook-server-cert" not found Dec 02 14:01:14 crc kubenswrapper[4625]: I1202 14:01:14.675574 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck\" (UID: \"a9612490-cbef-4040-a5f5-26737160de83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" Dec 02 14:01:14 crc kubenswrapper[4625]: E1202 14:01:14.675946 4625 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:01:14 crc kubenswrapper[4625]: E1202 14:01:14.676076 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert podName:a9612490-cbef-4040-a5f5-26737160de83 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:22.676043844 +0000 UTC m=+1038.638220919 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" (UID: "a9612490-cbef-4040-a5f5-26737160de83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 02 14:01:15 crc kubenswrapper[4625]: I1202 14:01:15.187832 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:15 crc kubenswrapper[4625]: I1202 14:01:15.188378 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:15 crc kubenswrapper[4625]: E1202 14:01:15.188629 4625 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 02 14:01:15 crc kubenswrapper[4625]: E1202 14:01:15.188720 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs podName:e3cfbc8e-665f-4007-a38d-714f53c48923 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:23.18869534 +0000 UTC m=+1039.150872415 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs") pod "openstack-operator-controller-manager-58cd586464-xpfd6" (UID: "e3cfbc8e-665f-4007-a38d-714f53c48923") : secret "metrics-server-cert" not found Dec 02 14:01:15 crc kubenswrapper[4625]: E1202 14:01:15.189025 4625 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 02 14:01:15 crc kubenswrapper[4625]: E1202 14:01:15.189113 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs podName:e3cfbc8e-665f-4007-a38d-714f53c48923 nodeName:}" failed. No retries permitted until 2025-12-02 14:01:23.189087801 +0000 UTC m=+1039.151264876 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs") pod "openstack-operator-controller-manager-58cd586464-xpfd6" (UID: "e3cfbc8e-665f-4007-a38d-714f53c48923") : secret "webhook-server-cert" not found Dec 02 14:01:19 crc kubenswrapper[4625]: I1202 14:01:19.271744 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:01:19 crc kubenswrapper[4625]: I1202 14:01:19.272270 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:01:19 crc kubenswrapper[4625]: I1202 14:01:19.272357 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 14:01:19 crc kubenswrapper[4625]: I1202 14:01:19.273211 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26c37d19f3fe7a2800125178b96518c47f7905764a81c00a7c86f8da62aaaa2f"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:01:19 crc kubenswrapper[4625]: I1202 14:01:19.273275 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://26c37d19f3fe7a2800125178b96518c47f7905764a81c00a7c86f8da62aaaa2f" gracePeriod=600 Dec 02 14:01:20 crc kubenswrapper[4625]: I1202 14:01:20.378975 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="26c37d19f3fe7a2800125178b96518c47f7905764a81c00a7c86f8da62aaaa2f" exitCode=0 Dec 02 14:01:20 crc kubenswrapper[4625]: I1202 14:01:20.379020 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"26c37d19f3fe7a2800125178b96518c47f7905764a81c00a7c86f8da62aaaa2f"} Dec 02 14:01:20 crc kubenswrapper[4625]: I1202 14:01:20.379565 4625 scope.go:117] "RemoveContainer" containerID="c1d575805cab2283b92f1a4e7b510b132f2ba9784cf488248063f8b6d7df5e2f" Dec 02 14:01:21 crc kubenswrapper[4625]: I1202 14:01:21.559877 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert\") pod \"infra-operator-controller-manager-57548d458d-pr84z\" (UID: \"73f97b4a-0c9b-4422-a7fd-e2aab20f9825\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" Dec 02 14:01:21 crc kubenswrapper[4625]: I1202 14:01:21.571114 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73f97b4a-0c9b-4422-a7fd-e2aab20f9825-cert\") pod \"infra-operator-controller-manager-57548d458d-pr84z\" 
(UID: \"73f97b4a-0c9b-4422-a7fd-e2aab20f9825\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" Dec 02 14:01:21 crc kubenswrapper[4625]: I1202 14:01:21.615258 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" Dec 02 14:01:22 crc kubenswrapper[4625]: I1202 14:01:22.677478 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck\" (UID: \"a9612490-cbef-4040-a5f5-26737160de83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" Dec 02 14:01:22 crc kubenswrapper[4625]: I1202 14:01:22.682844 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9612490-cbef-4040-a5f5-26737160de83-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck\" (UID: \"a9612490-cbef-4040-a5f5-26737160de83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" Dec 02 14:01:22 crc kubenswrapper[4625]: I1202 14:01:22.721049 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" Dec 02 14:01:23 crc kubenswrapper[4625]: I1202 14:01:23.288342 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:23 crc kubenswrapper[4625]: I1202 14:01:23.288415 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:23 crc kubenswrapper[4625]: I1202 14:01:23.295197 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-webhook-certs\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:23 crc kubenswrapper[4625]: I1202 14:01:23.295204 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3cfbc8e-665f-4007-a38d-714f53c48923-metrics-certs\") pod \"openstack-operator-controller-manager-58cd586464-xpfd6\" (UID: \"e3cfbc8e-665f-4007-a38d-714f53c48923\") " pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:23 crc kubenswrapper[4625]: I1202 14:01:23.474348 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:23 crc kubenswrapper[4625]: E1202 14:01:23.561761 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 02 14:01:23 crc kubenswrapper[4625]: E1202 14:01:23.562031 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7wvfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-jktk5_openstack-operators(0f5a3014-4394-4a6f-972e-52f2ef19328f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:29 crc kubenswrapper[4625]: E1202 14:01:29.273388 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" Dec 02 14:01:29 crc kubenswrapper[4625]: E1202 14:01:29.274963 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-87gpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-q854l_openstack-operators(15504c82-ed79-4ab3-a157-7493e0b13058): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:29 crc kubenswrapper[4625]: E1202 14:01:29.450292 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.193:5001/openstack-k8s-operators/cinder-operator:541711b0ad64e626b427e8f12797a49eef78555c" Dec 02 14:01:29 crc kubenswrapper[4625]: E1202 14:01:29.450624 4625 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.193:5001/openstack-k8s-operators/cinder-operator:541711b0ad64e626b427e8f12797a49eef78555c" Dec 02 14:01:29 crc kubenswrapper[4625]: E1202 14:01:29.450821 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.193:5001/openstack-k8s-operators/cinder-operator:541711b0ad64e626b427e8f12797a49eef78555c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6bl8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-55f4dbb9b7-bhlt2_openstack-operators(95cd9233-3d9c-45e1-ade0-6753a952b721): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:30 crc kubenswrapper[4625]: E1202 14:01:30.183640 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" Dec 02 14:01:30 crc kubenswrapper[4625]: E1202 14:01:30.183992 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gzzbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-vck47_openstack-operators(a1bf70dd-f5d1-45a9-94a9-86fffb0758b2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:32 crc kubenswrapper[4625]: E1202 14:01:32.579432 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 02 14:01:32 crc kubenswrapper[4625]: E1202 14:01:32.579702 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lhcj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-jhwcz_openstack-operators(137e6ec9-76ad-4b65-a788-a8a38f84343f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:33 crc kubenswrapper[4625]: E1202 14:01:33.091231 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9" Dec 02 14:01:33 crc kubenswrapper[4625]: E1202 14:01:33.091592 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dqn7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-lwmht_openstack-operators(5e58537e-7499-41c1-b154-ff06bd4dd58a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:33 crc kubenswrapper[4625]: E1202 14:01:33.801662 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" Dec 02 14:01:33 crc kubenswrapper[4625]: E1202 14:01:33.802201 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-phd45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-qtsvv_openstack-operators(a686420b-bad9-418e-b729-96680afd0f07): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:36 crc kubenswrapper[4625]: E1202 14:01:36.262499 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621" Dec 02 14:01:36 crc kubenswrapper[4625]: E1202 14:01:36.262937 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-csb9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-wpdzp_openstack-operators(db262c5c-48c7-4749-990c-77993791ba47): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:36 crc kubenswrapper[4625]: E1202 14:01:36.904773 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Dec 02 14:01:36 crc kubenswrapper[4625]: E1202 14:01:36.905292 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x6cn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-wqq4b_openstack-operators(37a0be8e-736e-486e-a1af-abc65c34c25b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:37 crc kubenswrapper[4625]: E1202 14:01:37.629828 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" Dec 02 14:01:37 crc kubenswrapper[4625]: E1202 14:01:37.630129 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b5fvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-mmjnh_openstack-operators(6075f378-b13f-422a-a3c0-3301d78d3fa9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:39 crc kubenswrapper[4625]: E1202 14:01:39.722426 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 02 14:01:39 crc kubenswrapper[4625]: E1202 14:01:39.724233 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xrrm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-jwks4_openstack-operators(910705f2-ee02-421a-a0eb-eb594d119f9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:40 crc kubenswrapper[4625]: E1202 14:01:40.418693 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 02 14:01:40 crc kubenswrapper[4625]: E1202 14:01:40.419506 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q79r5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-lg24z_openstack-operators(d04e4d3e-b826-40ad-9955-7c7ba1379920): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:42 crc kubenswrapper[4625]: E1202 14:01:42.442052 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d" Dec 02 14:01:42 crc kubenswrapper[4625]: E1202 14:01:42.442802 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25t8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-ls4vx_openstack-operators(0daed4ec-6cef-4f70-bdf2-27c278868917): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:43 crc kubenswrapper[4625]: E1202 14:01:43.432342 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 02 14:01:43 crc kubenswrapper[4625]: E1202 14:01:43.432666 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8qbkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-qgjn7_openstack-operators(4dff1b74-d58f-40b9-a3a6-c1ebdd498690): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:45 crc kubenswrapper[4625]: E1202 14:01:45.340813 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 02 14:01:45 crc kubenswrapper[4625]: E1202 14:01:45.342016 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nwnlc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-k4nlb_openstack-operators(0fbe84bd-4dc3-4f2c-b890-16a1b15f4d0e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:47 crc kubenswrapper[4625]: E1202 14:01:47.444989 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 02 14:01:47 crc kubenswrapper[4625]: E1202 14:01:47.445277 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gfwrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-zk4xg_openstack-operators(cc5f44ae-eba1-40ca-8391-49985c6211bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:47 crc kubenswrapper[4625]: I1202 14:01:47.453810 4625 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:01:48 crc kubenswrapper[4625]: I1202 14:01:48.312842 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6"] Dec 02 14:01:48 crc kubenswrapper[4625]: E1202 14:01:48.847463 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 02 14:01:48 crc kubenswrapper[4625]: E1202 14:01:48.848460 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hdmbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-7lmwf_openstack-operators(38eaf493-09d1-441e-81a9-777174f24006): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:48 crc kubenswrapper[4625]: E1202 14:01:48.849759 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lmwf" podUID="38eaf493-09d1-441e-81a9-777174f24006" Dec 02 14:01:49 crc kubenswrapper[4625]: I1202 14:01:49.475534 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-pr84z"] Dec 02 14:01:49 crc kubenswrapper[4625]: I1202 14:01:49.484390 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck"] Dec 02 14:01:49 crc kubenswrapper[4625]: I1202 14:01:49.710045 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" event={"ID":"e3cfbc8e-665f-4007-a38d-714f53c48923","Type":"ContainerStarted","Data":"497d5df7bd88c33ec99217c45c496e18cb5c540599c4d3fe06c6aed7bb946790"} Dec 02 14:01:49 crc kubenswrapper[4625]: I1202 14:01:49.715049 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"5271eaf0b8b85861d7c190af249c8999cbc2c292aa3724e0a85121cbb59f2516"} Dec 02 14:01:49 crc kubenswrapper[4625]: E1202 14:01:49.716565 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lmwf" podUID="38eaf493-09d1-441e-81a9-777174f24006" Dec 02 14:01:50 crc kubenswrapper[4625]: W1202 14:01:50.155145 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73f97b4a_0c9b_4422_a7fd_e2aab20f9825.slice/crio-2c9085ced4dc6242352912b3775cba602461aba5fae99e638d74e4fec3d2a819 WatchSource:0}: Error finding container 2c9085ced4dc6242352912b3775cba602461aba5fae99e638d74e4fec3d2a819: Status 404 returned error can't find the container with id 
2c9085ced4dc6242352912b3775cba602461aba5fae99e638d74e4fec3d2a819 Dec 02 14:01:50 crc kubenswrapper[4625]: W1202 14:01:50.161441 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9612490_cbef_4040_a5f5_26737160de83.slice/crio-62f1ba349a09d543f1da9c4fe0f4055578bcb68301b764f5c54dcad58928ffff WatchSource:0}: Error finding container 62f1ba349a09d543f1da9c4fe0f4055578bcb68301b764f5c54dcad58928ffff: Status 404 returned error can't find the container with id 62f1ba349a09d543f1da9c4fe0f4055578bcb68301b764f5c54dcad58928ffff Dec 02 14:01:50 crc kubenswrapper[4625]: I1202 14:01:50.739156 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-gr5bt" event={"ID":"79d42122-959c-41e1-9c56-58788fd56100","Type":"ContainerStarted","Data":"b4697cb3fbdf9de36758fcd5258e65cae52967a7d105a344e75c3121e2061006"} Dec 02 14:01:50 crc kubenswrapper[4625]: I1202 14:01:50.744437 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" event={"ID":"a9612490-cbef-4040-a5f5-26737160de83","Type":"ContainerStarted","Data":"62f1ba349a09d543f1da9c4fe0f4055578bcb68301b764f5c54dcad58928ffff"} Dec 02 14:01:50 crc kubenswrapper[4625]: I1202 14:01:50.745872 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" event={"ID":"73f97b4a-0c9b-4422-a7fd-e2aab20f9825","Type":"ContainerStarted","Data":"2c9085ced4dc6242352912b3775cba602461aba5fae99e638d74e4fec3d2a819"} Dec 02 14:01:50 crc kubenswrapper[4625]: I1202 14:01:50.748579 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-b882s" event={"ID":"95a50933-f183-45d5-b8e2-aac85155551e","Type":"ContainerStarted","Data":"b86010c84bb293bd66da2469d913dea0dd840d98960130de2a42d04250b43fd0"} Dec 02 14:01:51 crc kubenswrapper[4625]: I1202 14:01:51.771364 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kl65q" event={"ID":"d20c6701-017d-4f33-91f0-10199890032f","Type":"ContainerStarted","Data":"8ab57b9a0817722765477a2f2815157dd948a432985e4c69596e19f1cf3825ba"} Dec 02 14:01:51 crc kubenswrapper[4625]: I1202 14:01:51.792656 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" event={"ID":"e3cfbc8e-665f-4007-a38d-714f53c48923","Type":"ContainerStarted","Data":"08c34897145391c80f04209224b3fd711f7f81b51cdb6729b850cb0cadcf9ba7"} Dec 02 14:01:51 crc kubenswrapper[4625]: I1202 14:01:51.793966 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:01:51 crc kubenswrapper[4625]: I1202 14:01:51.851224 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" podStartSLOduration=45.851196181 podStartE2EDuration="45.851196181s" podCreationTimestamp="2025-12-02 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:01:51.845911589 +0000 UTC m=+1067.808088664" watchObservedRunningTime="2025-12-02 14:01:51.851196181 +0000 UTC m=+1067.813373256" Dec 02 14:01:55 crc 
kubenswrapper[4625]: E1202 14:01:55.862015 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jhwcz" podUID="137e6ec9-76ad-4b65-a788-a8a38f84343f" Dec 02 14:01:55 crc kubenswrapper[4625]: E1202 14:01:55.933201 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vck47" podUID="a1bf70dd-f5d1-45a9-94a9-86fffb0758b2" Dec 02 14:01:55 crc kubenswrapper[4625]: E1202 14:01:55.952295 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-zk4xg" podUID="cc5f44ae-eba1-40ca-8391-49985c6211bd" Dec 02 14:01:55 crc kubenswrapper[4625]: E1202 14:01:55.966862 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-qgjn7" podUID="4dff1b74-d58f-40b9-a3a6-c1ebdd498690" Dec 02 14:01:56 crc kubenswrapper[4625]: E1202 14:01:56.063713 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jwks4" podUID="910705f2-ee02-421a-a0eb-eb594d119f9e" Dec 02 14:01:56 crc kubenswrapper[4625]: E1202 14:01:56.067530 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wqq4b" podUID="37a0be8e-736e-486e-a1af-abc65c34c25b" Dec 02 14:01:56 crc kubenswrapper[4625]: E1202 14:01:56.307181 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q854l" podUID="15504c82-ed79-4ab3-a157-7493e0b13058" Dec 02 14:01:56 crc kubenswrapper[4625]: E1202 14:01:56.393396 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mmjnh" podUID="6075f378-b13f-422a-a3c0-3301d78d3fa9" Dec 02 14:01:56 crc kubenswrapper[4625]: I1202 14:01:56.422090 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jhwcz" event={"ID":"137e6ec9-76ad-4b65-a788-a8a38f84343f","Type":"ContainerStarted","Data":"24629379b4e5d9ad8d53b5a84e86340ffaa40baf76078cea367dea4237460ff3"} Dec 02 14:01:56 crc kubenswrapper[4625]: I1202 14:01:56.447670 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kl65q" event={"ID":"d20c6701-017d-4f33-91f0-10199890032f","Type":"ContainerStarted","Data":"55f2beb0ff3d39707bb5dbe27ac30ab2b7220ab3ef57a9cfcedba03804170af5"} Dec 02 14:01:56 crc kubenswrapper[4625]: I1202 14:01:56.449642 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kl65q" Dec 02 14:01:56 crc kubenswrapper[4625]: I1202 14:01:56.458221 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kl65q" Dec 02 14:01:56 crc kubenswrapper[4625]: I1202 14:01:56.460798 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q854l" event={"ID":"15504c82-ed79-4ab3-a157-7493e0b13058","Type":"ContainerStarted","Data":"46402820c7f3b92f5171ee61a4e573fd8964c12cb3d930898f051ac4efc6f5be"} Dec 02 14:01:56 crc kubenswrapper[4625]: I1202 14:01:56.482209 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mmjnh" event={"ID":"6075f378-b13f-422a-a3c0-3301d78d3fa9","Type":"ContainerStarted","Data":"88df373aee38aad5cf658778080b93d4b7682206d2b1d9e8808c481e6236862c"} Dec 02 14:01:56 crc kubenswrapper[4625]: I1202 14:01:56.500386 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jwks4" event={"ID":"910705f2-ee02-421a-a0eb-eb594d119f9e","Type":"ContainerStarted","Data":"b097f5945b0aad10943adfc5a240a0e997fd883ae341165654e703ae4a2a98a3"} Dec 02 14:01:56 crc kubenswrapper[4625]: I1202 14:01:56.522763 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-gr5bt" event={"ID":"79d42122-959c-41e1-9c56-58788fd56100","Type":"ContainerStarted","Data":"6abd9959d8182e81e9cecd5accd15324ef92a489f669788790eab8ec80bbda4c"} Dec 02 14:01:56 crc kubenswrapper[4625]: I1202 14:01:56.522813 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-gr5bt" Dec 02 14:01:56 crc kubenswrapper[4625]: I1202 14:01:56.538492 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-kl65q" podStartSLOduration=3.505478677 podStartE2EDuration="51.538463074s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 14:01:07.654490673 +0000 UTC m=+1023.616667748" lastFinishedPulling="2025-12-02 14:01:55.68747506 +0000 UTC m=+1071.649652145" observedRunningTime="2025-12-02 14:01:56.524021815 +0000 UTC m=+1072.486198900" watchObservedRunningTime="2025-12-02 14:01:56.538463074 +0000 UTC m=+1072.500640159" Dec 02 14:01:56 crc kubenswrapper[4625]: I1202 14:01:56.539542 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vck47" event={"ID":"a1bf70dd-f5d1-45a9-94a9-86fffb0758b2","Type":"ContainerStarted","Data":"e2145d005cafb6d83df08deb6a0be1fa4ef5cf16b074f0e0b5c6f8cc2622e25d"} Dec 02 14:01:56 crc kubenswrapper[4625]: E1202 14:01:56.546450 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 02 14:01:56 
crc kubenswrapper[4625]: E1202 14:01:56.546673 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7wvfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-jktk5_openstack-operators(0f5a3014-4394-4a6f-972e-52f2ef19328f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:01:56 crc kubenswrapper[4625]: I1202 14:01:56.546992 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-gr5bt" Dec 02 14:01:56 crc kubenswrapper[4625]: E1202 14:01:56.547839 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jktk5" podUID="0f5a3014-4394-4a6f-972e-52f2ef19328f" Dec 02 14:01:56 crc kubenswrapper[4625]: I1202 14:01:56.560784 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-zk4xg" event={"ID":"cc5f44ae-eba1-40ca-8391-49985c6211bd","Type":"ContainerStarted","Data":"181e4e3eb3a46cd52e0ec46e810cd2fdfb66ebaf03ccf462bf7cd4d93ddf9a08"} Dec 02 14:01:56 crc kubenswrapper[4625]: I1202 14:01:56.585330 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wqq4b" event={"ID":"37a0be8e-736e-486e-a1af-abc65c34c25b","Type":"ContainerStarted","Data":"3da3b3ed27b7e3ac31d7b4f719f265a85b092674cd116a14c0b793333ee498b4"} Dec 02 14:01:56 crc kubenswrapper[4625]: E1202 14:01:56.615774 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-zk4xg" podUID="cc5f44ae-eba1-40ca-8391-49985c6211bd" Dec 02 14:01:56 crc kubenswrapper[4625]: I1202 14:01:56.627630 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-qgjn7" event={"ID":"4dff1b74-d58f-40b9-a3a6-c1ebdd498690","Type":"ContainerStarted","Data":"d40504b5f2e15db60cc9c1ee30feabd4fd2c3fd7b12c6635c495f119dd57a425"} Dec 02 14:01:56 crc kubenswrapper[4625]: E1202 14:01:56.655228 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-bhlt2" podUID="95cd9233-3d9c-45e1-ade0-6753a952b721" Dec 02 14:01:56 crc kubenswrapper[4625]: E1202 14:01:56.767387 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-lwmht" podUID="5e58537e-7499-41c1-b154-ff06bd4dd58a" Dec 02 14:01:56 crc kubenswrapper[4625]: I1202 14:01:56.808021 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-gr5bt" podStartSLOduration=4.206468971 podStartE2EDuration="51.807987894s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 14:01:07.922334963 +0000 UTC m=+1023.884512038" lastFinishedPulling="2025-12-02 14:01:55.523853886 +0000 UTC m=+1071.486030961" observedRunningTime="2025-12-02 14:01:56.789045983 +0000 UTC m=+1072.751223058" watchObservedRunningTime="2025-12-02 14:01:56.807987894 +0000 UTC m=+1072.770164969" Dec 02 14:01:56 crc kubenswrapper[4625]: E1202 14:01:56.942696 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-qtsvv" podUID="a686420b-bad9-418e-b729-96680afd0f07" Dec 02 14:01:57 crc kubenswrapper[4625]: E1202 14:01:57.035293 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-ls4vx" podUID="0daed4ec-6cef-4f70-bdf2-27c278868917" Dec 02 14:01:57 crc kubenswrapper[4625]: I1202 14:01:57.638107 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-ls4vx" event={"ID":"0daed4ec-6cef-4f70-bdf2-27c278868917","Type":"ContainerStarted","Data":"43a60828a691d847681a779cf11ae9a45dbab0ceb7264ce420dd55ee6ad00a34"} Dec 02 14:01:57 crc kubenswrapper[4625]: I1202 14:01:57.640938 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-bhlt2" event={"ID":"95cd9233-3d9c-45e1-ade0-6753a952b721","Type":"ContainerStarted","Data":"97324e2b9952dc10a670676a04fb11d0bbe4d3b90d163d4067267b10cbd043f1"} Dec 02 14:01:57 crc kubenswrapper[4625]: I1202 14:01:57.667740 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-lwmht" event={"ID":"5e58537e-7499-41c1-b154-ff06bd4dd58a","Type":"ContainerStarted","Data":"10d51865f803eb0a43743367249d2076e998268b805fb0797047152288a548b5"} Dec 02 14:01:57 crc kubenswrapper[4625]: I1202 14:01:57.683431 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-qtsvv" event={"ID":"a686420b-bad9-418e-b729-96680afd0f07","Type":"ContainerStarted","Data":"ff818a2dfbf347b89b8b5845e1394f2344fab142c9f0e23f64faaa8de803cd48"} Dec 02 14:01:59 crc kubenswrapper[4625]: E1202 14:01:59.288685 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-wpdzp" podUID="db262c5c-48c7-4749-990c-77993791ba47" Dec 02 14:01:59 crc kubenswrapper[4625]: I1202 14:01:59.704002 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-b882s" event={"ID":"95a50933-f183-45d5-b8e2-aac85155551e","Type":"ContainerStarted","Data":"5fdf10694b410d30a04f41120f48b1732663174b822b2c91106252c5a8666bf3"} Dec 02 14:01:59 crc kubenswrapper[4625]: I1202 14:01:59.704490 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-b882s" Dec 02 14:01:59 crc kubenswrapper[4625]: I1202 14:01:59.707514 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-wpdzp" event={"ID":"db262c5c-48c7-4749-990c-77993791ba47","Type":"ContainerStarted","Data":"c3720b6e792c1cf384eeaa758ab004cb6b1428bbdc46e3105fa117aab9063b55"} Dec 02 14:01:59 crc kubenswrapper[4625]: I1202 14:01:59.708050 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-b882s" Dec 02 14:01:59 crc kubenswrapper[4625]: I1202 14:01:59.734956 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-b882s" podStartSLOduration=5.358365007 podStartE2EDuration="54.734929685s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 14:01:07.654049881 +0000 UTC m=+1023.616226956" lastFinishedPulling="2025-12-02 14:01:57.030614559 +0000 UTC m=+1072.992791634" observedRunningTime="2025-12-02 14:01:59.734132803 +0000 UTC m=+1075.696309888" watchObservedRunningTime="2025-12-02 14:01:59.734929685 +0000 UTC m=+1075.697106760" Dec 02 14:02:01 crc kubenswrapper[4625]: E1202 14:02:01.580984 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lg24z" podUID="d04e4d3e-b826-40ad-9955-7c7ba1379920" Dec 02 14:02:01 crc kubenswrapper[4625]: I1202 14:02:01.784701 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mmjnh" event={"ID":"6075f378-b13f-422a-a3c0-3301d78d3fa9","Type":"ContainerStarted","Data":"b14421388769a5525d0c64dd56d90877f1cca3d8a74efecb117ffdd87fee33db"} Dec 02 14:02:01 crc kubenswrapper[4625]: I1202 14:02:01.805064 4625 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lg24z" event={"ID":"d04e4d3e-b826-40ad-9955-7c7ba1379920","Type":"ContainerStarted","Data":"1369fd8265bb56745b0eff8ff6cab5699c63fbae02b98c206f6e0db1a27a7f3e"} Dec 02 14:02:02 crc kubenswrapper[4625]: E1202 14:02:02.356864 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-k4nlb" podUID="0fbe84bd-4dc3-4f2c-b890-16a1b15f4d0e" Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.819161 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jwks4" event={"ID":"910705f2-ee02-421a-a0eb-eb594d119f9e","Type":"ContainerStarted","Data":"3972b8133ab1d54a9df098941e157fb99540e45bc88db69a1f7174e3a76c64ee"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.819689 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jwks4" Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.822494 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-zk4xg" event={"ID":"cc5f44ae-eba1-40ca-8391-49985c6211bd","Type":"ContainerStarted","Data":"612e75f2f3308dbf6b62bb894564bdda01eb3db42c69d979e270d26d0c0dc32d"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.823162 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-zk4xg" Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.825383 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" event={"ID":"a9612490-cbef-4040-a5f5-26737160de83","Type":"ContainerStarted","Data":"ee65edf2e777f5fbcad087256eca4a44c5decb478491a89b9c1f6fed284ba174"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.825411 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" event={"ID":"a9612490-cbef-4040-a5f5-26737160de83","Type":"ContainerStarted","Data":"092023ae438641867d6e102c9b57e2d946edbe27d24ccfbbc4674a0f0bbb78d8"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.825830 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.827905 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-ls4vx" event={"ID":"0daed4ec-6cef-4f70-bdf2-27c278868917","Type":"ContainerStarted","Data":"c804516bf74cef6b4ea2e6a209bb6a207f448bbfc91beafb5380ac10a1b85d6f"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.828336 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-ls4vx" Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.830225 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jktk5" 
event={"ID":"0f5a3014-4394-4a6f-972e-52f2ef19328f","Type":"ContainerStarted","Data":"f5435e1ee50fafdfaed743b71e5b1ad48358ba26fedac48bffac3dfc9c06d145"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.830252 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jktk5" event={"ID":"0f5a3014-4394-4a6f-972e-52f2ef19328f","Type":"ContainerStarted","Data":"939022fa08682a050c6a2445afffbcdbd598a2295ce494d89e0551078a54c177"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.831790 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-k4nlb" event={"ID":"0fbe84bd-4dc3-4f2c-b890-16a1b15f4d0e","Type":"ContainerStarted","Data":"b974b9a86105198ab78ab2baeeebb2fb5bc7ab74879bd744d8391bb7c139f3d5"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.836016 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-qgjn7" event={"ID":"4dff1b74-d58f-40b9-a3a6-c1ebdd498690","Type":"ContainerStarted","Data":"08435941ec300b771f6dfbb005656a93d3e6a1dedec7d17373dd554db9461380"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.836502 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-qgjn7" Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.842236 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q854l" event={"ID":"15504c82-ed79-4ab3-a157-7493e0b13058","Type":"ContainerStarted","Data":"4b7b85dbfb52a5f7ba20b8b0fe6a9a1c852df0070527a57be1344cee17e502ac"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.842857 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q854l" Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.847343 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-bhlt2" event={"ID":"95cd9233-3d9c-45e1-ade0-6753a952b721","Type":"ContainerStarted","Data":"871d67eed91e3fa1fb08e9c2c693f6b91470caa878f1c1609cf1c8ed3bae434d"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.847618 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-bhlt2" Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.849767 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jhwcz" event={"ID":"137e6ec9-76ad-4b65-a788-a8a38f84343f","Type":"ContainerStarted","Data":"4b14746a352ecae84942d698fd47c523624f39fe8a0f2493ee8a78ae034f4c70"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.849897 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jhwcz" Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.851618 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" event={"ID":"73f97b4a-0c9b-4422-a7fd-e2aab20f9825","Type":"ContainerStarted","Data":"99943e9238209dd448b0cde14e052a945544dcfefdd4cba2a713350769b39da1"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.851656 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" event={"ID":"73f97b4a-0c9b-4422-a7fd-e2aab20f9825","Type":"ContainerStarted","Data":"7e233d647d613278b902a3876418d4c7b3e493d2c5f8d3377642ed7af8cdb501"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.852528 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.854856 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-wpdzp" event={"ID":"db262c5c-48c7-4749-990c-77993791ba47","Type":"ContainerStarted","Data":"29e2935c9bfac15c7b33caaa73544b95027a01b4d46d99f6652fa13e6fed5311"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.855298 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-wpdzp" Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.870298 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wqq4b" event={"ID":"37a0be8e-736e-486e-a1af-abc65c34c25b","Type":"ContainerStarted","Data":"e48ef5849d354695d48cc340eaf7f5d82e1a818f21eede1341fdcddbc18c8a77"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.870369 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wqq4b" Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.874197 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-qtsvv" event={"ID":"a686420b-bad9-418e-b729-96680afd0f07","Type":"ContainerStarted","Data":"8f86a4e15a38edd9786c38c7b110f535bd2a5d27f5dc4598b24b237dc1308e2e"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.874812 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-qtsvv" Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.877520 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vck47" event={"ID":"a1bf70dd-f5d1-45a9-94a9-86fffb0758b2","Type":"ContainerStarted","Data":"58e0f4922c9ebbcbb59f7a3eb0e252eb45219988833f292877c1f87d7d1bf025"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.878001 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vck47" Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.884723 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-lwmht" event={"ID":"5e58537e-7499-41c1-b154-ff06bd4dd58a","Type":"ContainerStarted","Data":"eb0796245156c7db831a9a90daeac9a274cdbdeff66b91841f24552648a2401f"} Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.884782 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mmjnh" Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.884800 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-lwmht" Dec 02 14:02:02 crc kubenswrapper[4625]: I1202 14:02:02.986223 4625 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jwks4" podStartSLOduration=5.057744674 podStartE2EDuration="57.986202053s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 14:01:08.145397461 +0000 UTC m=+1024.107574536" lastFinishedPulling="2025-12-02 14:02:01.07385484 +0000 UTC m=+1077.036031915" observedRunningTime="2025-12-02 14:02:02.98387239 +0000 UTC m=+1078.946049465" watchObservedRunningTime="2025-12-02 14:02:02.986202053 +0000 UTC m=+1078.948379128" Dec 02 14:02:03 crc kubenswrapper[4625]: I1202 14:02:03.173135 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-qgjn7" podStartSLOduration=6.237939163 podStartE2EDuration="58.173105224s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 14:01:09.270597227 +0000 UTC m=+1025.232774302" lastFinishedPulling="2025-12-02 14:02:01.205763288 +0000 UTC m=+1077.167940363" observedRunningTime="2025-12-02 14:02:03.171012218 +0000 UTC m=+1079.133189303" watchObservedRunningTime="2025-12-02 14:02:03.173105224 +0000 UTC m=+1079.135282299" Dec 02 14:02:03 crc kubenswrapper[4625]: I1202 14:02:03.387821 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-zk4xg" podStartSLOduration=5.583088756 podStartE2EDuration="58.387792486s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 14:01:08.444944082 +0000 UTC m=+1024.407121157" lastFinishedPulling="2025-12-02 14:02:01.249647812 +0000 UTC m=+1077.211824887" observedRunningTime="2025-12-02 14:02:03.387090646 +0000 UTC m=+1079.349267751" watchObservedRunningTime="2025-12-02 14:02:03.387792486 +0000 UTC m=+1079.349969561" Dec 02 14:02:03 crc kubenswrapper[4625]: I1202 14:02:03.494576 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mmjnh" podStartSLOduration=5.602824947 podStartE2EDuration="57.494550745s" podCreationTimestamp="2025-12-02 14:01:06 +0000 UTC" firstStartedPulling="2025-12-02 14:01:09.272007816 +0000 UTC m=+1025.234184891" lastFinishedPulling="2025-12-02 14:02:01.163733614 +0000 UTC m=+1077.125910689" observedRunningTime="2025-12-02 14:02:03.460803485 +0000 UTC m=+1079.422980570" watchObservedRunningTime="2025-12-02 14:02:03.494550745 +0000 UTC m=+1079.456727820" Dec 02 14:02:03 crc kubenswrapper[4625]: I1202 14:02:03.497758 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-58cd586464-xpfd6" Dec 02 14:02:03 crc kubenswrapper[4625]: I1202 14:02:03.686591 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" podStartSLOduration=47.925954242 podStartE2EDuration="58.686569514s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 14:01:50.174964768 +0000 UTC m=+1066.137141843" lastFinishedPulling="2025-12-02 14:02:00.93558004 +0000 UTC m=+1076.897757115" observedRunningTime="2025-12-02 14:02:03.534356409 +0000 UTC m=+1079.496533484" watchObservedRunningTime="2025-12-02 14:02:03.686569514 +0000 UTC m=+1079.648746589" Dec 02 14:02:03 crc kubenswrapper[4625]: I1202 14:02:03.691368 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-ls4vx" podStartSLOduration=5.187190123 podStartE2EDuration="57.691350854s" podCreationTimestamp="2025-12-02 14:01:06 +0000 UTC" firstStartedPulling="2025-12-02 14:01:08.569655318 +0000 UTC m=+1024.531832393" lastFinishedPulling="2025-12-02 14:02:01.073816049 +0000 UTC m=+1077.035993124" observedRunningTime="2025-12-02 14:02:03.680038509 +0000 UTC m=+1079.642215604" watchObservedRunningTime="2025-12-02 14:02:03.691350854 +0000 UTC m=+1079.653527929" Dec 02 14:02:03 crc kubenswrapper[4625]: I1202 14:02:03.786471 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" podStartSLOduration=48.029885495 podStartE2EDuration="58.786441609s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 14:01:50.174941187 +0000 UTC m=+1066.137118262" lastFinishedPulling="2025-12-02 14:02:00.931497291 +0000 UTC m=+1076.893674376" observedRunningTime="2025-12-02 14:02:03.764677661 +0000 UTC m=+1079.726854736" watchObservedRunningTime="2025-12-02 14:02:03.786441609 +0000 UTC m=+1079.748618684" Dec 02 14:02:03 crc kubenswrapper[4625]: I1202 14:02:03.898103 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-lwmht" podStartSLOduration=5.825276732 podStartE2EDuration="58.89807846s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 14:01:08.114276936 +0000 UTC m=+1024.076454011" lastFinishedPulling="2025-12-02 14:02:01.187078664 +0000 UTC m=+1077.149255739" observedRunningTime="2025-12-02 14:02:03.889384695 +0000 UTC m=+1079.851561790" watchObservedRunningTime="2025-12-02 14:02:03.89807846 +0000 UTC m=+1079.860255535" Dec 02 14:02:03 crc kubenswrapper[4625]: I1202 14:02:03.899686 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-qtsvv" podStartSLOduration=5.362415496 podStartE2EDuration="58.899679383s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 14:01:07.414851789 +0000 UTC m=+1023.377028874" lastFinishedPulling="2025-12-02 14:02:00.952115686 +0000 UTC m=+1076.914292761" observedRunningTime="2025-12-02 14:02:03.846716395 +0000 UTC m=+1079.808893470" watchObservedRunningTime="2025-12-02 14:02:03.899679383 +0000 UTC m=+1079.861856458" Dec 02 14:02:03 crc kubenswrapper[4625]: I1202 14:02:03.936050 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q854l" podStartSLOduration=5.177633587 podStartE2EDuration="58.936026783s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 14:01:07.207202354 +0000 UTC m=+1023.169379429" lastFinishedPulling="2025-12-02 14:02:00.96559555 +0000 UTC m=+1076.927772625" observedRunningTime="2025-12-02 14:02:03.932238342 +0000 UTC m=+1079.894415437" watchObservedRunningTime="2025-12-02 14:02:03.936026783 +0000 UTC m=+1079.898203858" Dec 02 14:02:04 crc kubenswrapper[4625]: I1202 14:02:04.002705 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-bhlt2" podStartSLOduration=4.736297399 podStartE2EDuration="59.002684312s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 
14:01:06.918497882 +0000 UTC m=+1022.880674957" lastFinishedPulling="2025-12-02 14:02:01.184884795 +0000 UTC m=+1077.147061870" observedRunningTime="2025-12-02 14:02:03.998242972 +0000 UTC m=+1079.960420047" watchObservedRunningTime="2025-12-02 14:02:04.002684312 +0000 UTC m=+1079.964861387" Dec 02 14:02:04 crc kubenswrapper[4625]: I1202 14:02:04.151045 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jhwcz" podStartSLOduration=7.555079859 podStartE2EDuration="59.151022513s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 14:01:07.410991164 +0000 UTC m=+1023.373168239" lastFinishedPulling="2025-12-02 14:01:59.006933818 +0000 UTC m=+1074.969110893" observedRunningTime="2025-12-02 14:02:04.148949897 +0000 UTC m=+1080.111126972" watchObservedRunningTime="2025-12-02 14:02:04.151022513 +0000 UTC m=+1080.113199588" Dec 02 14:02:04 crc kubenswrapper[4625]: I1202 14:02:04.151215 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vck47" podStartSLOduration=5.071598383 podStartE2EDuration="59.151209858s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 14:01:07.119018331 +0000 UTC m=+1023.081195406" lastFinishedPulling="2025-12-02 14:02:01.198629806 +0000 UTC m=+1077.160806881" observedRunningTime="2025-12-02 14:02:04.108720492 +0000 UTC m=+1080.070897577" watchObservedRunningTime="2025-12-02 14:02:04.151209858 +0000 UTC m=+1080.113386933" Dec 02 14:02:04 crc kubenswrapper[4625]: I1202 14:02:04.299744 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wqq4b" podStartSLOduration=5.64892891 podStartE2EDuration="58.299715534s" podCreationTimestamp="2025-12-02 14:01:06 +0000 UTC" firstStartedPulling="2025-12-02 14:01:08.553598797 +0000 UTC m=+1024.515775872" lastFinishedPulling="2025-12-02 14:02:01.204385421 +0000 UTC m=+1077.166562496" observedRunningTime="2025-12-02 14:02:04.26064133 +0000 UTC m=+1080.222818415" watchObservedRunningTime="2025-12-02 14:02:04.299715534 +0000 UTC m=+1080.261892609" Dec 02 14:02:04 crc kubenswrapper[4625]: I1202 14:02:04.302091 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-wpdzp" podStartSLOduration=5.886766358 podStartE2EDuration="58.302083108s" podCreationTimestamp="2025-12-02 14:01:06 +0000 UTC" firstStartedPulling="2025-12-02 14:01:08.81865407 +0000 UTC m=+1024.780831135" lastFinishedPulling="2025-12-02 14:02:01.23397081 +0000 UTC m=+1077.196147885" observedRunningTime="2025-12-02 14:02:04.296015594 +0000 UTC m=+1080.258192669" watchObservedRunningTime="2025-12-02 14:02:04.302083108 +0000 UTC m=+1080.264260183" Dec 02 14:02:04 crc kubenswrapper[4625]: I1202 14:02:04.434399 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jktk5" podStartSLOduration=7.075302577 podStartE2EDuration="59.434368687s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 14:01:08.62504578 +0000 UTC m=+1024.587222855" lastFinishedPulling="2025-12-02 14:02:00.98411189 +0000 UTC m=+1076.946288965" observedRunningTime="2025-12-02 14:02:04.429778893 +0000 UTC m=+1080.391955968" watchObservedRunningTime="2025-12-02 14:02:04.434368687 
+0000 UTC m=+1080.396545762" Dec 02 14:02:04 crc kubenswrapper[4625]: I1202 14:02:04.947242 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lmwf" event={"ID":"38eaf493-09d1-441e-81a9-777174f24006","Type":"ContainerStarted","Data":"6fbc59599f91ad4bc2a63dffc002072899c7ccfd6660ac992a851aab87b15310"} Dec 02 14:02:05 crc kubenswrapper[4625]: I1202 14:02:05.047664 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7lmwf" podStartSLOduration=5.168725812 podStartE2EDuration="59.047624269s" podCreationTimestamp="2025-12-02 14:01:06 +0000 UTC" firstStartedPulling="2025-12-02 14:01:09.279711007 +0000 UTC m=+1025.241888102" lastFinishedPulling="2025-12-02 14:02:03.158609484 +0000 UTC m=+1079.120786559" observedRunningTime="2025-12-02 14:02:04.986167271 +0000 UTC m=+1080.948344366" watchObservedRunningTime="2025-12-02 14:02:05.047624269 +0000 UTC m=+1081.009801364" Dec 02 14:02:05 crc kubenswrapper[4625]: I1202 14:02:05.957799 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lg24z" event={"ID":"d04e4d3e-b826-40ad-9955-7c7ba1379920","Type":"ContainerStarted","Data":"2a27593abe288afc6421e4a4be2ce471b719156f6a06a34e22fdc600759ed146"} Dec 02 14:02:05 crc kubenswrapper[4625]: I1202 14:02:05.959045 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lg24z" Dec 02 14:02:05 crc kubenswrapper[4625]: I1202 14:02:05.985094 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lg24z" podStartSLOduration=4.741792055 podStartE2EDuration="1m0.985063496s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 14:01:08.769080457 +0000 UTC m=+1024.731257532" lastFinishedPulling="2025-12-02 14:02:05.012351898 +0000 UTC m=+1080.974528973" observedRunningTime="2025-12-02 14:02:05.97627843 +0000 UTC m=+1081.938455505" watchObservedRunningTime="2025-12-02 14:02:05.985063496 +0000 UTC m=+1081.947240571" Dec 02 14:02:06 crc kubenswrapper[4625]: I1202 14:02:06.105239 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-qtsvv" Dec 02 14:02:06 crc kubenswrapper[4625]: I1202 14:02:06.968346 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-k4nlb" event={"ID":"0fbe84bd-4dc3-4f2c-b890-16a1b15f4d0e","Type":"ContainerStarted","Data":"ca976e24631d7388a3f16d05b957f6ea142b66c7bfe413297aa48dbe75cca329"} Dec 02 14:02:06 crc kubenswrapper[4625]: I1202 14:02:06.968469 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-k4nlb" Dec 02 14:02:06 crc kubenswrapper[4625]: I1202 14:02:06.994234 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-k4nlb" podStartSLOduration=4.046425625 podStartE2EDuration="1m1.994211237s" podCreationTimestamp="2025-12-02 14:01:05 +0000 UTC" firstStartedPulling="2025-12-02 14:01:07.804826344 +0000 UTC m=+1023.767003419" lastFinishedPulling="2025-12-02 14:02:05.752611956 +0000 UTC m=+1081.714789031" 
observedRunningTime="2025-12-02 14:02:06.988690678 +0000 UTC m=+1082.950867763" watchObservedRunningTime="2025-12-02 14:02:06.994211237 +0000 UTC m=+1082.956388312" Dec 02 14:02:07 crc kubenswrapper[4625]: I1202 14:02:07.042793 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jktk5" Dec 02 14:02:07 crc kubenswrapper[4625]: I1202 14:02:07.045711 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jktk5" Dec 02 14:02:07 crc kubenswrapper[4625]: I1202 14:02:07.108947 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-qgjn7" Dec 02 14:02:07 crc kubenswrapper[4625]: I1202 14:02:07.252916 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-ls4vx" Dec 02 14:02:07 crc kubenswrapper[4625]: I1202 14:02:07.330278 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-mmjnh" Dec 02 14:02:07 crc kubenswrapper[4625]: I1202 14:02:07.336234 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-wqq4b" Dec 02 14:02:07 crc kubenswrapper[4625]: I1202 14:02:07.559029 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-wpdzp" Dec 02 14:02:11 crc kubenswrapper[4625]: I1202 14:02:11.623107 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-pr84z" Dec 02 14:02:12 crc kubenswrapper[4625]: I1202 14:02:12.728804 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck" Dec 02 14:02:15 crc kubenswrapper[4625]: I1202 14:02:15.562235 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vck47" Dec 02 14:02:15 crc kubenswrapper[4625]: I1202 14:02:15.600492 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55f4dbb9b7-bhlt2" Dec 02 14:02:15 crc kubenswrapper[4625]: I1202 14:02:15.874847 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q854l" Dec 02 14:02:15 crc kubenswrapper[4625]: I1202 14:02:15.972834 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-jhwcz" Dec 02 14:02:16 crc kubenswrapper[4625]: I1202 14:02:16.287004 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-k4nlb" Dec 02 14:02:16 crc kubenswrapper[4625]: I1202 14:02:16.464480 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-lwmht" Dec 02 14:02:16 crc kubenswrapper[4625]: I1202 14:02:16.798956 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-jwks4" Dec 02 14:02:16 crc kubenswrapper[4625]: I1202 14:02:16.890494 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-zk4xg" Dec 02 14:02:17 crc kubenswrapper[4625]: I1202 14:02:17.025969 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lg24z" Dec 02 14:02:30 crc kubenswrapper[4625]: I1202 14:02:30.946122 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-shctx"] Dec 02 14:02:30 crc kubenswrapper[4625]: I1202 14:02:30.948290 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-shctx" Dec 02 14:02:30 crc kubenswrapper[4625]: I1202 14:02:30.956217 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 02 14:02:30 crc kubenswrapper[4625]: I1202 14:02:30.956443 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 02 14:02:30 crc kubenswrapper[4625]: I1202 14:02:30.956625 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 02 14:02:30 crc kubenswrapper[4625]: I1202 14:02:30.956829 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-lmjgv" Dec 02 14:02:30 crc kubenswrapper[4625]: I1202 14:02:30.981401 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-shctx"] Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.048327 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9dtzn"] Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.049943 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9dtzn" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.053333 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef633f35-66ee-4816-931d-110b69a1103c-config\") pod \"dnsmasq-dns-675f4bcbfc-shctx\" (UID: \"ef633f35-66ee-4816-931d-110b69a1103c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-shctx" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.053387 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnqq2\" (UniqueName: \"kubernetes.io/projected/ef633f35-66ee-4816-931d-110b69a1103c-kube-api-access-vnqq2\") pod \"dnsmasq-dns-675f4bcbfc-shctx\" (UID: \"ef633f35-66ee-4816-931d-110b69a1103c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-shctx" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.056235 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.059891 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9dtzn"] Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.161223 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef633f35-66ee-4816-931d-110b69a1103c-config\") pod \"dnsmasq-dns-675f4bcbfc-shctx\" (UID: \"ef633f35-66ee-4816-931d-110b69a1103c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-shctx" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.161341 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnqq2\" (UniqueName: \"kubernetes.io/projected/ef633f35-66ee-4816-931d-110b69a1103c-kube-api-access-vnqq2\") pod \"dnsmasq-dns-675f4bcbfc-shctx\" (UID: \"ef633f35-66ee-4816-931d-110b69a1103c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-shctx" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.161414 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9dtzn\" (UID: \"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9dtzn" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.161452 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-config\") pod \"dnsmasq-dns-78dd6ddcc-9dtzn\" (UID: \"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9dtzn" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.161515 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwbqg\" (UniqueName: \"kubernetes.io/projected/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-kube-api-access-gwbqg\") pod \"dnsmasq-dns-78dd6ddcc-9dtzn\" (UID: \"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9dtzn" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.162972 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef633f35-66ee-4816-931d-110b69a1103c-config\") pod \"dnsmasq-dns-675f4bcbfc-shctx\" (UID: \"ef633f35-66ee-4816-931d-110b69a1103c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-shctx" Dec 02 
14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.186081 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnqq2\" (UniqueName: \"kubernetes.io/projected/ef633f35-66ee-4816-931d-110b69a1103c-kube-api-access-vnqq2\") pod \"dnsmasq-dns-675f4bcbfc-shctx\" (UID: \"ef633f35-66ee-4816-931d-110b69a1103c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-shctx" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.262927 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9dtzn\" (UID: \"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9dtzn" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.263000 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-config\") pod \"dnsmasq-dns-78dd6ddcc-9dtzn\" (UID: \"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9dtzn" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.263049 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwbqg\" (UniqueName: \"kubernetes.io/projected/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-kube-api-access-gwbqg\") pod \"dnsmasq-dns-78dd6ddcc-9dtzn\" (UID: \"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9dtzn" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.264076 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9dtzn\" (UID: \"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9dtzn" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.264404 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-config\") pod \"dnsmasq-dns-78dd6ddcc-9dtzn\" (UID: \"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9dtzn" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.294256 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwbqg\" (UniqueName: \"kubernetes.io/projected/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-kube-api-access-gwbqg\") pod \"dnsmasq-dns-78dd6ddcc-9dtzn\" (UID: \"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9dtzn" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.300810 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-shctx" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.382249 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9dtzn" Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.874520 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-shctx"] Dec 02 14:02:31 crc kubenswrapper[4625]: I1202 14:02:31.932073 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9dtzn"] Dec 02 14:02:31 crc kubenswrapper[4625]: W1202 14:02:31.939617 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9bfdd78_d318_46ac_b5ff_eb0e1c0698ad.slice/crio-012752fa7f9c8eafa0b6c65db0d9b354bad15337bd412b0b8d2a8dd0453efd0c WatchSource:0}: Error finding container 012752fa7f9c8eafa0b6c65db0d9b354bad15337bd412b0b8d2a8dd0453efd0c: Status 404 returned error can't find the container with id 012752fa7f9c8eafa0b6c65db0d9b354bad15337bd412b0b8d2a8dd0453efd0c Dec 02 14:02:32 crc kubenswrapper[4625]: I1202 14:02:32.186691 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9dtzn" event={"ID":"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad","Type":"ContainerStarted","Data":"012752fa7f9c8eafa0b6c65db0d9b354bad15337bd412b0b8d2a8dd0453efd0c"} Dec 02 14:02:32 crc kubenswrapper[4625]: I1202 14:02:32.187719 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-shctx" event={"ID":"ef633f35-66ee-4816-931d-110b69a1103c","Type":"ContainerStarted","Data":"d00f28f5b1fcee4613c8d44147f22c66a6963dd77bbbfb619a1533c9ea2233f3"} Dec 02 14:02:33 crc kubenswrapper[4625]: I1202 14:02:33.914416 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-shctx"] Dec 02 14:02:33 crc kubenswrapper[4625]: I1202 14:02:33.979780 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4tlvh"] Dec 02 14:02:33 crc kubenswrapper[4625]: I1202 14:02:33.982969 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4tlvh" Dec 02 14:02:33 crc kubenswrapper[4625]: I1202 14:02:33.988973 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4tlvh"] Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.121323 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f10fd77-5735-4112-9cbf-5365df9a3c1e-config\") pod \"dnsmasq-dns-666b6646f7-4tlvh\" (UID: \"6f10fd77-5735-4112-9cbf-5365df9a3c1e\") " pod="openstack/dnsmasq-dns-666b6646f7-4tlvh" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.121389 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f25j8\" (UniqueName: \"kubernetes.io/projected/6f10fd77-5735-4112-9cbf-5365df9a3c1e-kube-api-access-f25j8\") pod \"dnsmasq-dns-666b6646f7-4tlvh\" (UID: \"6f10fd77-5735-4112-9cbf-5365df9a3c1e\") " pod="openstack/dnsmasq-dns-666b6646f7-4tlvh" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.121455 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f10fd77-5735-4112-9cbf-5365df9a3c1e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4tlvh\" (UID: \"6f10fd77-5735-4112-9cbf-5365df9a3c1e\") " pod="openstack/dnsmasq-dns-666b6646f7-4tlvh" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.224100 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f10fd77-5735-4112-9cbf-5365df9a3c1e-config\") pod \"dnsmasq-dns-666b6646f7-4tlvh\" (UID: \"6f10fd77-5735-4112-9cbf-5365df9a3c1e\") " pod="openstack/dnsmasq-dns-666b6646f7-4tlvh" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.222914 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f10fd77-5735-4112-9cbf-5365df9a3c1e-config\") pod \"dnsmasq-dns-666b6646f7-4tlvh\" (UID: \"6f10fd77-5735-4112-9cbf-5365df9a3c1e\") " pod="openstack/dnsmasq-dns-666b6646f7-4tlvh" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.224214 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f25j8\" (UniqueName: \"kubernetes.io/projected/6f10fd77-5735-4112-9cbf-5365df9a3c1e-kube-api-access-f25j8\") pod \"dnsmasq-dns-666b6646f7-4tlvh\" (UID: \"6f10fd77-5735-4112-9cbf-5365df9a3c1e\") " pod="openstack/dnsmasq-dns-666b6646f7-4tlvh" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.224889 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f10fd77-5735-4112-9cbf-5365df9a3c1e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4tlvh\" (UID: \"6f10fd77-5735-4112-9cbf-5365df9a3c1e\") " pod="openstack/dnsmasq-dns-666b6646f7-4tlvh" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.225824 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f10fd77-5735-4112-9cbf-5365df9a3c1e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4tlvh\" (UID: \"6f10fd77-5735-4112-9cbf-5365df9a3c1e\") " pod="openstack/dnsmasq-dns-666b6646f7-4tlvh" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.273417 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f25j8\" (UniqueName: 
\"kubernetes.io/projected/6f10fd77-5735-4112-9cbf-5365df9a3c1e-kube-api-access-f25j8\") pod \"dnsmasq-dns-666b6646f7-4tlvh\" (UID: \"6f10fd77-5735-4112-9cbf-5365df9a3c1e\") " pod="openstack/dnsmasq-dns-666b6646f7-4tlvh" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.320106 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4tlvh" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.396575 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9dtzn"] Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.423543 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-thws2"] Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.425145 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-thws2" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.440588 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-thws2"] Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.536448 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fdeae5a-d9fa-49b9-b103-cb48db42df39-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-thws2\" (UID: \"7fdeae5a-d9fa-49b9-b103-cb48db42df39\") " pod="openstack/dnsmasq-dns-57d769cc4f-thws2" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.536553 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fdeae5a-d9fa-49b9-b103-cb48db42df39-config\") pod \"dnsmasq-dns-57d769cc4f-thws2\" (UID: \"7fdeae5a-d9fa-49b9-b103-cb48db42df39\") " pod="openstack/dnsmasq-dns-57d769cc4f-thws2" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.536744 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ddsp\" (UniqueName: \"kubernetes.io/projected/7fdeae5a-d9fa-49b9-b103-cb48db42df39-kube-api-access-8ddsp\") pod \"dnsmasq-dns-57d769cc4f-thws2\" (UID: \"7fdeae5a-d9fa-49b9-b103-cb48db42df39\") " pod="openstack/dnsmasq-dns-57d769cc4f-thws2" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.645164 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fdeae5a-d9fa-49b9-b103-cb48db42df39-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-thws2\" (UID: \"7fdeae5a-d9fa-49b9-b103-cb48db42df39\") " pod="openstack/dnsmasq-dns-57d769cc4f-thws2" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.645619 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fdeae5a-d9fa-49b9-b103-cb48db42df39-config\") pod \"dnsmasq-dns-57d769cc4f-thws2\" (UID: \"7fdeae5a-d9fa-49b9-b103-cb48db42df39\") " pod="openstack/dnsmasq-dns-57d769cc4f-thws2" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.645778 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ddsp\" (UniqueName: \"kubernetes.io/projected/7fdeae5a-d9fa-49b9-b103-cb48db42df39-kube-api-access-8ddsp\") pod \"dnsmasq-dns-57d769cc4f-thws2\" (UID: \"7fdeae5a-d9fa-49b9-b103-cb48db42df39\") " pod="openstack/dnsmasq-dns-57d769cc4f-thws2" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.646527 4625 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fdeae5a-d9fa-49b9-b103-cb48db42df39-config\") pod \"dnsmasq-dns-57d769cc4f-thws2\" (UID: \"7fdeae5a-d9fa-49b9-b103-cb48db42df39\") " pod="openstack/dnsmasq-dns-57d769cc4f-thws2" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.646527 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fdeae5a-d9fa-49b9-b103-cb48db42df39-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-thws2\" (UID: \"7fdeae5a-d9fa-49b9-b103-cb48db42df39\") " pod="openstack/dnsmasq-dns-57d769cc4f-thws2" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.692671 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ddsp\" (UniqueName: \"kubernetes.io/projected/7fdeae5a-d9fa-49b9-b103-cb48db42df39-kube-api-access-8ddsp\") pod \"dnsmasq-dns-57d769cc4f-thws2\" (UID: \"7fdeae5a-d9fa-49b9-b103-cb48db42df39\") " pod="openstack/dnsmasq-dns-57d769cc4f-thws2" Dec 02 14:02:34 crc kubenswrapper[4625]: I1202 14:02:34.777963 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-thws2" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.281792 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.283786 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.294166 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.294471 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.294615 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.294766 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.294946 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-h7kl5" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.295108 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.295269 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.370819 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.370868 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.370900 4625 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.371246 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtqvm\" (UniqueName: \"kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-kube-api-access-xtqvm\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.371339 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.371387 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.371548 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-config-data\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.371599 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.371628 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.371657 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.371746 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.386927 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 
14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.444174 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4tlvh"] Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.475337 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.475391 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.475426 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.475448 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtqvm\" (UniqueName: \"kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-kube-api-access-xtqvm\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.475466 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.475483 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.475521 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-config-data\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.475543 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.475558 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.475578 4625 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.475607 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.479712 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.480105 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.482368 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.485496 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-config-data\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.485777 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.486186 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.560564 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.560869 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 
14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.564951 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.598213 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.622094 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtqvm\" (UniqueName: \"kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-kube-api-access-xtqvm\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.631715 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.668342 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.693473 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-thws2"] Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.728159 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.730189 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.738857 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.739146 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.739276 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.739462 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.751497 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.751804 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-td5w5" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.751982 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.775806 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.902371 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.902419 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.902441 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a251393-cf48-4d79-8e8d-b46d5e3c664b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.902481 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a251393-cf48-4d79-8e8d-b46d5e3c664b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.902529 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.902559 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.902596 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.902618 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.902640 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rbm2\" (UniqueName: \"kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-kube-api-access-6rbm2\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.902666 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:35 crc kubenswrapper[4625]: I1202 14:02:35.902687 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.005630 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a251393-cf48-4d79-8e8d-b46d5e3c664b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.005799 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.005884 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.005991 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.006026 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.007029 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.007070 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.007413 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.007601 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rbm2\" (UniqueName: \"kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-kube-api-access-6rbm2\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.007873 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.007916 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.008152 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.008179 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.008229 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a251393-cf48-4d79-8e8d-b46d5e3c664b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.009500 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.010730 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.010758 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.023881 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.023924 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a251393-cf48-4d79-8e8d-b46d5e3c664b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.024574 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.030954 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a251393-cf48-4d79-8e8d-b46d5e3c664b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.045877 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rbm2\" (UniqueName: \"kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-kube-api-access-6rbm2\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.121050 4625 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.313488 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-thws2" event={"ID":"7fdeae5a-d9fa-49b9-b103-cb48db42df39","Type":"ContainerStarted","Data":"8fb9b30a1e80fe9746a92df8ae1b35a93c88f6e72c1a52ff37f1151afd44092d"} Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.326114 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4tlvh" event={"ID":"6f10fd77-5735-4112-9cbf-5365df9a3c1e","Type":"ContainerStarted","Data":"76c41ffc22289ab3a5eb4a02bda2919ca9a65ff29a6749e61f46af5d2d5b13d3"} Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.378129 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:02:36 crc kubenswrapper[4625]: I1202 14:02:36.506816 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.142851 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.146263 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.148968 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.151738 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.151934 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.157265 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-frv4c" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.160081 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.177120 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.271541 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5hph\" (UniqueName: \"kubernetes.io/projected/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-kube-api-access-n5hph\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.271626 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.271663 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-config-data-default\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.271701 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.271723 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.271759 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-kolla-config\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.271998 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.272127 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.345360 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5","Type":"ContainerStarted","Data":"2dc4f5997d765ffc327d36b5e0cc62e8bc9f6a45d2c7c9709e688a5705ccc57e"} Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.376561 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.376632 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.376707 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5hph\" (UniqueName: \"kubernetes.io/projected/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-kube-api-access-n5hph\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 
crc kubenswrapper[4625]: I1202 14:02:37.376739 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.376764 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-config-data-default\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.376785 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.376805 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.376831 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-kolla-config\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.378616 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-config-data-default\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.378901 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.379414 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.379570 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-kolla-config\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.379781 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.409092 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.459219 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.477261 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5hph\" (UniqueName: \"kubernetes.io/projected/2e108301-d560-49b4-a4b2-a2f45c2fa8fd-kube-api-access-n5hph\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: W1202 14:02:37.523867 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a251393_cf48_4d79_8e8d_b46d5e3c664b.slice/crio-a5c460a5b9b90ec61ba841009c1da3d99109a74de57744516da94ba3ee283665 WatchSource:0}: Error finding container a5c460a5b9b90ec61ba841009c1da3d99109a74de57744516da94ba3ee283665: Status 404 returned error can't find the container with id a5c460a5b9b90ec61ba841009c1da3d99109a74de57744516da94ba3ee283665 Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.536494 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.548455 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"2e108301-d560-49b4-a4b2-a2f45c2fa8fd\") " pod="openstack/openstack-galera-0" Dec 02 14:02:37 crc kubenswrapper[4625]: I1202 14:02:37.790919 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.063387 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.065412 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.069561 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.069974 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-ttrxn" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.070226 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.070403 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.086537 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.240839 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/266c6414-c5b8-4dd2-939d-2386a0756d9c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.240987 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.241040 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4sv7\" (UniqueName: \"kubernetes.io/projected/266c6414-c5b8-4dd2-939d-2386a0756d9c-kube-api-access-z4sv7\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.241155 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/266c6414-c5b8-4dd2-939d-2386a0756d9c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.241189 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266c6414-c5b8-4dd2-939d-2386a0756d9c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.241249 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/266c6414-c5b8-4dd2-939d-2386a0756d9c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.241276 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/266c6414-c5b8-4dd2-939d-2386a0756d9c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.241350 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/266c6414-c5b8-4dd2-939d-2386a0756d9c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.346706 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/266c6414-c5b8-4dd2-939d-2386a0756d9c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.347255 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266c6414-c5b8-4dd2-939d-2386a0756d9c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.347302 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/266c6414-c5b8-4dd2-939d-2386a0756d9c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.347344 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/266c6414-c5b8-4dd2-939d-2386a0756d9c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.347422 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/266c6414-c5b8-4dd2-939d-2386a0756d9c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.347447 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/266c6414-c5b8-4dd2-939d-2386a0756d9c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.347514 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.347558 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4sv7\" (UniqueName: 
\"kubernetes.io/projected/266c6414-c5b8-4dd2-939d-2386a0756d9c-kube-api-access-z4sv7\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.352390 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.357985 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/266c6414-c5b8-4dd2-939d-2386a0756d9c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.360269 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/266c6414-c5b8-4dd2-939d-2386a0756d9c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.360586 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/266c6414-c5b8-4dd2-939d-2386a0756d9c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.363095 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/266c6414-c5b8-4dd2-939d-2386a0756d9c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.377060 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266c6414-c5b8-4dd2-939d-2386a0756d9c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.395323 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/266c6414-c5b8-4dd2-939d-2386a0756d9c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.399555 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4sv7\" (UniqueName: \"kubernetes.io/projected/266c6414-c5b8-4dd2-939d-2386a0756d9c-kube-api-access-z4sv7\") pod \"openstack-cell1-galera-0\" (UID: \"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.461734 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"266c6414-c5b8-4dd2-939d-2386a0756d9c\") " pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.462122 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a251393-cf48-4d79-8e8d-b46d5e3c664b","Type":"ContainerStarted","Data":"a5c460a5b9b90ec61ba841009c1da3d99109a74de57744516da94ba3ee283665"} Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.572093 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.573604 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.583252 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.585870 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-hrwc2" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.586234 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.612388 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.662894 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpxd6\" (UniqueName: \"kubernetes.io/projected/8f8269cf-38ac-4207-be57-909e352cb528-kube-api-access-gpxd6\") pod \"memcached-0\" (UID: \"8f8269cf-38ac-4207-be57-909e352cb528\") " pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.662986 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f8269cf-38ac-4207-be57-909e352cb528-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8f8269cf-38ac-4207-be57-909e352cb528\") " pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.663019 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f8269cf-38ac-4207-be57-909e352cb528-kolla-config\") pod \"memcached-0\" (UID: \"8f8269cf-38ac-4207-be57-909e352cb528\") " pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.663095 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f8269cf-38ac-4207-be57-909e352cb528-config-data\") pod \"memcached-0\" (UID: \"8f8269cf-38ac-4207-be57-909e352cb528\") " pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.663141 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f8269cf-38ac-4207-be57-909e352cb528-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8f8269cf-38ac-4207-be57-909e352cb528\") " pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.726994 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.768893 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f8269cf-38ac-4207-be57-909e352cb528-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8f8269cf-38ac-4207-be57-909e352cb528\") " pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.768993 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpxd6\" (UniqueName: \"kubernetes.io/projected/8f8269cf-38ac-4207-be57-909e352cb528-kube-api-access-gpxd6\") pod \"memcached-0\" (UID: \"8f8269cf-38ac-4207-be57-909e352cb528\") " pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.769045 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f8269cf-38ac-4207-be57-909e352cb528-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8f8269cf-38ac-4207-be57-909e352cb528\") " pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.769069 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f8269cf-38ac-4207-be57-909e352cb528-kolla-config\") pod \"memcached-0\" (UID: \"8f8269cf-38ac-4207-be57-909e352cb528\") " pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.769139 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f8269cf-38ac-4207-be57-909e352cb528-config-data\") pod \"memcached-0\" (UID: \"8f8269cf-38ac-4207-be57-909e352cb528\") " pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.770195 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f8269cf-38ac-4207-be57-909e352cb528-config-data\") pod \"memcached-0\" (UID: \"8f8269cf-38ac-4207-be57-909e352cb528\") " pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.775522 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f8269cf-38ac-4207-be57-909e352cb528-kolla-config\") pod \"memcached-0\" (UID: \"8f8269cf-38ac-4207-be57-909e352cb528\") " pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.778879 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f8269cf-38ac-4207-be57-909e352cb528-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8f8269cf-38ac-4207-be57-909e352cb528\") " pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.779487 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f8269cf-38ac-4207-be57-909e352cb528-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8f8269cf-38ac-4207-be57-909e352cb528\") " pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.800192 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpxd6\" (UniqueName: \"kubernetes.io/projected/8f8269cf-38ac-4207-be57-909e352cb528-kube-api-access-gpxd6\") pod \"memcached-0\" (UID: 
\"8f8269cf-38ac-4207-be57-909e352cb528\") " pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.968948 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 02 14:02:38 crc kubenswrapper[4625]: I1202 14:02:38.979663 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 02 14:02:39 crc kubenswrapper[4625]: W1202 14:02:39.184680 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e108301_d560_49b4_a4b2_a2f45c2fa8fd.slice/crio-2bca33e664fee905781182cf164866182ca2c39db9c0cdbb2851605cbcb7d711 WatchSource:0}: Error finding container 2bca33e664fee905781182cf164866182ca2c39db9c0cdbb2851605cbcb7d711: Status 404 returned error can't find the container with id 2bca33e664fee905781182cf164866182ca2c39db9c0cdbb2851605cbcb7d711 Dec 02 14:02:39 crc kubenswrapper[4625]: I1202 14:02:39.546978 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2e108301-d560-49b4-a4b2-a2f45c2fa8fd","Type":"ContainerStarted","Data":"2bca33e664fee905781182cf164866182ca2c39db9c0cdbb2851605cbcb7d711"} Dec 02 14:02:40 crc kubenswrapper[4625]: I1202 14:02:40.140679 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 02 14:02:40 crc kubenswrapper[4625]: I1202 14:02:40.324372 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 02 14:02:40 crc kubenswrapper[4625]: I1202 14:02:40.443270 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 14:02:40 crc kubenswrapper[4625]: I1202 14:02:40.459706 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 14:02:40 crc kubenswrapper[4625]: I1202 14:02:40.473109 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-hk79v" Dec 02 14:02:40 crc kubenswrapper[4625]: I1202 14:02:40.500121 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 14:02:40 crc kubenswrapper[4625]: I1202 14:02:40.525498 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddz9d\" (UniqueName: \"kubernetes.io/projected/42709959-9e14-4b40-8ae8-813bf5e41d5c-kube-api-access-ddz9d\") pod \"kube-state-metrics-0\" (UID: \"42709959-9e14-4b40-8ae8-813bf5e41d5c\") " pod="openstack/kube-state-metrics-0" Dec 02 14:02:40 crc kubenswrapper[4625]: I1202 14:02:40.598586 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"266c6414-c5b8-4dd2-939d-2386a0756d9c","Type":"ContainerStarted","Data":"059561c5195a50771e33e06eced5d94f7c552a2d2c8610a8ff55fef266ecdec5"} Dec 02 14:02:40 crc kubenswrapper[4625]: I1202 14:02:40.602709 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8f8269cf-38ac-4207-be57-909e352cb528","Type":"ContainerStarted","Data":"29e8c73733002b439cf2b7929db2f5a68a073c97290eec1384ced30070ba4d32"} Dec 02 14:02:40 crc kubenswrapper[4625]: I1202 14:02:40.642446 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddz9d\" (UniqueName: \"kubernetes.io/projected/42709959-9e14-4b40-8ae8-813bf5e41d5c-kube-api-access-ddz9d\") pod \"kube-state-metrics-0\" (UID: \"42709959-9e14-4b40-8ae8-813bf5e41d5c\") " pod="openstack/kube-state-metrics-0" Dec 02 14:02:40 crc kubenswrapper[4625]: I1202 14:02:40.678572 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddz9d\" (UniqueName: \"kubernetes.io/projected/42709959-9e14-4b40-8ae8-813bf5e41d5c-kube-api-access-ddz9d\") pod \"kube-state-metrics-0\" (UID: \"42709959-9e14-4b40-8ae8-813bf5e41d5c\") " pod="openstack/kube-state-metrics-0" Dec 02 14:02:40 crc kubenswrapper[4625]: I1202 14:02:40.802289 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 14:02:41 crc kubenswrapper[4625]: I1202 14:02:41.576330 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 14:02:42 crc kubenswrapper[4625]: I1202 14:02:42.908032 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42709959-9e14-4b40-8ae8-813bf5e41d5c","Type":"ContainerStarted","Data":"ff63ce3e5ea4acf27d01488072948e972296c218e74cef963c5c634ac876e250"} Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.028470 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5hzbv"] Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.031059 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.034687 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-g46j2" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.034924 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.035184 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.067185 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5hzbv"] Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.133950 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fe58841-9566-4a48-9e44-6709020a943c-var-run-ovn\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.134089 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fe58841-9566-4a48-9e44-6709020a943c-scripts\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.136462 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe58841-9566-4a48-9e44-6709020a943c-combined-ca-bundle\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.136531 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vjcg\" (UniqueName: \"kubernetes.io/projected/3fe58841-9566-4a48-9e44-6709020a943c-kube-api-access-8vjcg\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.136561 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fe58841-9566-4a48-9e44-6709020a943c-var-run\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.136583 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fe58841-9566-4a48-9e44-6709020a943c-var-log-ovn\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.136675 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe58841-9566-4a48-9e44-6709020a943c-ovn-controller-tls-certs\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.176502 4625 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-bqzbz"] Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.186806 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.188566 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bqzbz"] Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.238399 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe58841-9566-4a48-9e44-6709020a943c-ovn-controller-tls-certs\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.238520 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3fe58841-9566-4a48-9e44-6709020a943c-var-run-ovn\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.238569 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fe58841-9566-4a48-9e44-6709020a943c-scripts\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.238600 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe58841-9566-4a48-9e44-6709020a943c-combined-ca-bundle\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.238626 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vjcg\" (UniqueName: \"kubernetes.io/projected/3fe58841-9566-4a48-9e44-6709020a943c-kube-api-access-8vjcg\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.238642 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fe58841-9566-4a48-9e44-6709020a943c-var-run\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.238663 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fe58841-9566-4a48-9e44-6709020a943c-var-log-ovn\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.239453 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3fe58841-9566-4a48-9e44-6709020a943c-var-log-ovn\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.246152 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/3fe58841-9566-4a48-9e44-6709020a943c-var-run-ovn\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.248041 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3fe58841-9566-4a48-9e44-6709020a943c-var-run\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.248678 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fe58841-9566-4a48-9e44-6709020a943c-scripts\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.260816 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe58841-9566-4a48-9e44-6709020a943c-ovn-controller-tls-certs\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.267742 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe58841-9566-4a48-9e44-6709020a943c-combined-ca-bundle\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.271129 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vjcg\" (UniqueName: \"kubernetes.io/projected/3fe58841-9566-4a48-9e44-6709020a943c-kube-api-access-8vjcg\") pod \"ovn-controller-5hzbv\" (UID: \"3fe58841-9566-4a48-9e44-6709020a943c\") " pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.340442 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-scripts\") pod \"ovn-controller-ovs-bqzbz\" (UID: \"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.340548 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f74s5\" (UniqueName: \"kubernetes.io/projected/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-kube-api-access-f74s5\") pod \"ovn-controller-ovs-bqzbz\" (UID: \"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.340634 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-etc-ovs\") pod \"ovn-controller-ovs-bqzbz\" (UID: \"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.340674 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-var-run\") pod \"ovn-controller-ovs-bqzbz\" (UID: 
\"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.340732 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-var-log\") pod \"ovn-controller-ovs-bqzbz\" (UID: \"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.340757 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-var-lib\") pod \"ovn-controller-ovs-bqzbz\" (UID: \"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.406378 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5hzbv" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.443182 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-scripts\") pod \"ovn-controller-ovs-bqzbz\" (UID: \"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.443286 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f74s5\" (UniqueName: \"kubernetes.io/projected/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-kube-api-access-f74s5\") pod \"ovn-controller-ovs-bqzbz\" (UID: \"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.443514 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-etc-ovs\") pod \"ovn-controller-ovs-bqzbz\" (UID: \"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.443955 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-var-run\") pod \"ovn-controller-ovs-bqzbz\" (UID: \"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.444028 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-etc-ovs\") pod \"ovn-controller-ovs-bqzbz\" (UID: \"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.443550 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-var-run\") pod \"ovn-controller-ovs-bqzbz\" (UID: \"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.444123 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-var-log\") pod \"ovn-controller-ovs-bqzbz\" (UID: 
\"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.444142 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-var-lib\") pod \"ovn-controller-ovs-bqzbz\" (UID: \"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.444357 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-var-lib\") pod \"ovn-controller-ovs-bqzbz\" (UID: \"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.444398 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-var-log\") pod \"ovn-controller-ovs-bqzbz\" (UID: \"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.447582 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-scripts\") pod \"ovn-controller-ovs-bqzbz\" (UID: \"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.508596 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f74s5\" (UniqueName: \"kubernetes.io/projected/0be922f6-018c-4504-bc6a-f0c8dd53ce5b-kube-api-access-f74s5\") pod \"ovn-controller-ovs-bqzbz\" (UID: \"0be922f6-018c-4504-bc6a-f0c8dd53ce5b\") " pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:43 crc kubenswrapper[4625]: I1202 14:02:43.559893 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:02:44 crc kubenswrapper[4625]: I1202 14:02:44.818000 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5hzbv"] Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.014810 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5hzbv" event={"ID":"3fe58841-9566-4a48-9e44-6709020a943c","Type":"ContainerStarted","Data":"dbbf4e69d21e9acddf7ee1a1a26b83f2e96336d25d515a49fc200ac258aed288"} Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.061850 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.068080 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.080404 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.080639 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.080835 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-frxxf" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.081032 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.081190 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.086762 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.318729 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31013ff5-b8be-40fb-9e34-5eac74bd1849-config\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.318832 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31013ff5-b8be-40fb-9e34-5eac74bd1849-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.318859 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl6ls\" (UniqueName: \"kubernetes.io/projected/31013ff5-b8be-40fb-9e34-5eac74bd1849-kube-api-access-vl6ls\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.318877 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/31013ff5-b8be-40fb-9e34-5eac74bd1849-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.318902 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/31013ff5-b8be-40fb-9e34-5eac74bd1849-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.318934 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31013ff5-b8be-40fb-9e34-5eac74bd1849-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.318952 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.318970 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31013ff5-b8be-40fb-9e34-5eac74bd1849-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.422126 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31013ff5-b8be-40fb-9e34-5eac74bd1849-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.422192 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl6ls\" (UniqueName: \"kubernetes.io/projected/31013ff5-b8be-40fb-9e34-5eac74bd1849-kube-api-access-vl6ls\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.422214 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/31013ff5-b8be-40fb-9e34-5eac74bd1849-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.422258 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/31013ff5-b8be-40fb-9e34-5eac74bd1849-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.422337 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31013ff5-b8be-40fb-9e34-5eac74bd1849-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.422371 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.422388 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31013ff5-b8be-40fb-9e34-5eac74bd1849-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.422484 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31013ff5-b8be-40fb-9e34-5eac74bd1849-config\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 
14:02:45.423850 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31013ff5-b8be-40fb-9e34-5eac74bd1849-config\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.424180 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/31013ff5-b8be-40fb-9e34-5eac74bd1849-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.424282 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31013ff5-b8be-40fb-9e34-5eac74bd1849-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.424687 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.434719 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/31013ff5-b8be-40fb-9e34-5eac74bd1849-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.438897 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/31013ff5-b8be-40fb-9e34-5eac74bd1849-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.446274 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31013ff5-b8be-40fb-9e34-5eac74bd1849-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.489830 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl6ls\" (UniqueName: \"kubernetes.io/projected/31013ff5-b8be-40fb-9e34-5eac74bd1849-kube-api-access-vl6ls\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.523038 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"31013ff5-b8be-40fb-9e34-5eac74bd1849\") " pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:45 crc kubenswrapper[4625]: I1202 14:02:45.710126 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 02 14:02:46 crc kubenswrapper[4625]: I1202 14:02:46.542691 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bqzbz"] Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.013799 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-ph9m2"] Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.033501 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ph9m2"] Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.033678 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.050716 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.186846 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh9b2\" (UniqueName: \"kubernetes.io/projected/2bef51af-8aba-4e71-a607-69b0e7facae6-kube-api-access-dh9b2\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.186945 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bef51af-8aba-4e71-a607-69b0e7facae6-config\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.187000 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2bef51af-8aba-4e71-a607-69b0e7facae6-ovn-rundir\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.187054 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bef51af-8aba-4e71-a607-69b0e7facae6-combined-ca-bundle\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.187085 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2bef51af-8aba-4e71-a607-69b0e7facae6-ovs-rundir\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.187145 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bef51af-8aba-4e71-a607-69b0e7facae6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.261162 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 
14:02:47.289389 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bef51af-8aba-4e71-a607-69b0e7facae6-config\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.289465 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2bef51af-8aba-4e71-a607-69b0e7facae6-ovn-rundir\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.289504 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bef51af-8aba-4e71-a607-69b0e7facae6-combined-ca-bundle\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.289531 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2bef51af-8aba-4e71-a607-69b0e7facae6-ovs-rundir\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.289580 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bef51af-8aba-4e71-a607-69b0e7facae6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.289625 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh9b2\" (UniqueName: \"kubernetes.io/projected/2bef51af-8aba-4e71-a607-69b0e7facae6-kube-api-access-dh9b2\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.296761 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bef51af-8aba-4e71-a607-69b0e7facae6-config\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.297163 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2bef51af-8aba-4e71-a607-69b0e7facae6-ovn-rundir\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.300379 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2bef51af-8aba-4e71-a607-69b0e7facae6-ovs-rundir\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.330824 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dh9b2\" (UniqueName: \"kubernetes.io/projected/2bef51af-8aba-4e71-a607-69b0e7facae6-kube-api-access-dh9b2\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.331771 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bef51af-8aba-4e71-a607-69b0e7facae6-combined-ca-bundle\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.346506 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bef51af-8aba-4e71-a607-69b0e7facae6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ph9m2\" (UID: \"2bef51af-8aba-4e71-a607-69b0e7facae6\") " pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.377885 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ph9m2" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.492579 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4tlvh"] Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.556762 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kpd7h"] Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.582711 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.606433 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kpd7h"] Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.641096 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.701772 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92bdw\" (UniqueName: \"kubernetes.io/projected/b663e5bc-58df-4629-b207-376cc825d4e4-kube-api-access-92bdw\") pod \"dnsmasq-dns-7fd796d7df-kpd7h\" (UID: \"b663e5bc-58df-4629-b207-376cc825d4e4\") " pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.701953 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kpd7h\" (UID: \"b663e5bc-58df-4629-b207-376cc825d4e4\") " pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.702082 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kpd7h\" (UID: \"b663e5bc-58df-4629-b207-376cc825d4e4\") " pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.702121 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-config\") pod 
\"dnsmasq-dns-7fd796d7df-kpd7h\" (UID: \"b663e5bc-58df-4629-b207-376cc825d4e4\") " pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.804339 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kpd7h\" (UID: \"b663e5bc-58df-4629-b207-376cc825d4e4\") " pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.804420 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kpd7h\" (UID: \"b663e5bc-58df-4629-b207-376cc825d4e4\") " pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.804460 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-config\") pod \"dnsmasq-dns-7fd796d7df-kpd7h\" (UID: \"b663e5bc-58df-4629-b207-376cc825d4e4\") " pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.804524 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92bdw\" (UniqueName: \"kubernetes.io/projected/b663e5bc-58df-4629-b207-376cc825d4e4-kube-api-access-92bdw\") pod \"dnsmasq-dns-7fd796d7df-kpd7h\" (UID: \"b663e5bc-58df-4629-b207-376cc825d4e4\") " pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.805902 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kpd7h\" (UID: \"b663e5bc-58df-4629-b207-376cc825d4e4\") " pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.805945 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-config\") pod \"dnsmasq-dns-7fd796d7df-kpd7h\" (UID: \"b663e5bc-58df-4629-b207-376cc825d4e4\") " pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.806796 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kpd7h\" (UID: \"b663e5bc-58df-4629-b207-376cc825d4e4\") " pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.831799 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92bdw\" (UniqueName: \"kubernetes.io/projected/b663e5bc-58df-4629-b207-376cc825d4e4-kube-api-access-92bdw\") pod \"dnsmasq-dns-7fd796d7df-kpd7h\" (UID: \"b663e5bc-58df-4629-b207-376cc825d4e4\") " pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" Dec 02 14:02:47 crc kubenswrapper[4625]: I1202 14:02:47.978190 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.187140 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.189281 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.196748 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.197637 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.197758 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-k9rzw" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.197698 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.200807 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.327256 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3295e090-e4c0-4c88-a3aa-9d938e0b541d-config\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.327687 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.327782 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3295e090-e4c0-4c88-a3aa-9d938e0b541d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.327878 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r94pq\" (UniqueName: \"kubernetes.io/projected/3295e090-e4c0-4c88-a3aa-9d938e0b541d-kube-api-access-r94pq\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.328017 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3295e090-e4c0-4c88-a3aa-9d938e0b541d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.328127 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3295e090-e4c0-4c88-a3aa-9d938e0b541d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc 
kubenswrapper[4625]: I1202 14:02:48.328280 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3295e090-e4c0-4c88-a3aa-9d938e0b541d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.329109 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3295e090-e4c0-4c88-a3aa-9d938e0b541d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.431404 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3295e090-e4c0-4c88-a3aa-9d938e0b541d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.431495 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r94pq\" (UniqueName: \"kubernetes.io/projected/3295e090-e4c0-4c88-a3aa-9d938e0b541d-kube-api-access-r94pq\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.431540 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3295e090-e4c0-4c88-a3aa-9d938e0b541d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.431620 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3295e090-e4c0-4c88-a3aa-9d938e0b541d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.431683 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3295e090-e4c0-4c88-a3aa-9d938e0b541d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.431741 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3295e090-e4c0-4c88-a3aa-9d938e0b541d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.431803 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3295e090-e4c0-4c88-a3aa-9d938e0b541d-config\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.431843 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.432245 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.434262 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3295e090-e4c0-4c88-a3aa-9d938e0b541d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.435084 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3295e090-e4c0-4c88-a3aa-9d938e0b541d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.437803 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3295e090-e4c0-4c88-a3aa-9d938e0b541d-config\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.443152 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3295e090-e4c0-4c88-a3aa-9d938e0b541d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.460511 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3295e090-e4c0-4c88-a3aa-9d938e0b541d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.471624 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3295e090-e4c0-4c88-a3aa-9d938e0b541d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.510859 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r94pq\" (UniqueName: \"kubernetes.io/projected/3295e090-e4c0-4c88-a3aa-9d938e0b541d-kube-api-access-r94pq\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.526118 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3295e090-e4c0-4c88-a3aa-9d938e0b541d\") " pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:48 crc kubenswrapper[4625]: I1202 14:02:48.825095 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 02 14:02:58 crc kubenswrapper[4625]: I1202 14:02:58.205547 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"31013ff5-b8be-40fb-9e34-5eac74bd1849","Type":"ContainerStarted","Data":"e0df8817efef9d0e3e4bcae18fc4dbdf82dadf388ac89bcc460906b0ad703261"} Dec 02 14:03:01 crc kubenswrapper[4625]: W1202 14:03:01.268764 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0be922f6_018c_4504_bc6a_f0c8dd53ce5b.slice/crio-cbe37c5e12ca08034805941a941833ea7ffdaec8c98ce4350d9ee9e7bc5c00c0 WatchSource:0}: Error finding container cbe37c5e12ca08034805941a941833ea7ffdaec8c98ce4350d9ee9e7bc5c00c0: Status 404 returned error can't find the container with id cbe37c5e12ca08034805941a941833ea7ffdaec8c98ce4350d9ee9e7bc5c00c0 Dec 02 14:03:02 crc kubenswrapper[4625]: I1202 14:03:02.387645 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bqzbz" event={"ID":"0be922f6-018c-4504-bc6a-f0c8dd53ce5b","Type":"ContainerStarted","Data":"cbe37c5e12ca08034805941a941833ea7ffdaec8c98ce4350d9ee9e7bc5c00c0"} Dec 02 14:03:02 crc kubenswrapper[4625]: E1202 14:03:02.988440 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 02 14:03:02 crc kubenswrapper[4625]: E1202 14:03:02.988859 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtqvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(1ab3c28f-42ae-43ae-a6d7-10460f3da4c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:03:02 crc kubenswrapper[4625]: E1202 14:03:02.990454 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" Dec 02 14:03:02 crc kubenswrapper[4625]: E1202 14:03:02.999537 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 02 14:03:02 crc kubenswrapper[4625]: E1202 14:03:02.999784 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rbm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(5a251393-cf48-4d79-8e8d-b46d5e3c664b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:03:03 crc kubenswrapper[4625]: E1202 14:03:03.001342 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="5a251393-cf48-4d79-8e8d-b46d5e3c664b" Dec 02 14:03:03 crc kubenswrapper[4625]: E1202 14:03:03.398464 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="5a251393-cf48-4d79-8e8d-b46d5e3c664b" Dec 02 14:03:03 crc kubenswrapper[4625]: E1202 14:03:03.398464 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" Dec 02 14:03:08 crc kubenswrapper[4625]: E1202 14:03:08.193487 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-controller/blobs/sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659\": context canceled" 
image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Dec 02 14:03:08 crc kubenswrapper[4625]: E1202 14:03:08.194479 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c4h5d9h585h69h9fhdbh598h54bh6chd6hbchb7hf7h7fhd6h56fh77h56ch576h56fh5bbh646h79h4h89h7fh68dh694h5bch5bh56h599q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8vjcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN 
SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-5hzbv_openstack(3fe58841-9566-4a48-9e44-6709020a943c): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659: Get \"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-controller/blobs/sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659\": context canceled" logger="UnhandledError" Dec 02 14:03:08 crc kubenswrapper[4625]: E1202 14:03:08.196261 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659: Get \\\"https://quay.io/v2/podified-antelope-centos9/openstack-ovn-controller/blobs/sha256:650ba1d1fb09dfb2a96481f2ebae84fbae4b3474d057dd6d1569ce094dc41659\\\": context canceled\"" pod="openstack/ovn-controller-5hzbv" podUID="3fe58841-9566-4a48-9e44-6709020a943c" Dec 02 14:03:08 crc kubenswrapper[4625]: E1202 14:03:08.443183 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-5hzbv" podUID="3fe58841-9566-4a48-9e44-6709020a943c" Dec 02 14:03:10 crc kubenswrapper[4625]: E1202 14:03:10.953975 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 02 14:03:10 crc kubenswrapper[4625]: E1202 14:03:10.954228 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4sv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(266c6414-c5b8-4dd2-939d-2386a0756d9c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:03:10 crc kubenswrapper[4625]: E1202 14:03:10.955511 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="266c6414-c5b8-4dd2-939d-2386a0756d9c" Dec 02 14:03:11 crc kubenswrapper[4625]: E1202 14:03:11.490434 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="266c6414-c5b8-4dd2-939d-2386a0756d9c" Dec 02 14:03:13 crc kubenswrapper[4625]: E1202 14:03:13.048422 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Dec 02 14:03:13 crc kubenswrapper[4625]: E1202 14:03:13.049139 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:nc6h555h647h684h566h58bh646h5bch5fdhc9h669hfhb7h687h58ch598h65bh85hdbh58h84h545h9h595h657h646h658h5d8h65h5f7h5bfh567q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gpxd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(8f8269cf-38ac-4207-be57-909e352cb528): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:03:13 crc kubenswrapper[4625]: E1202 14:03:13.053402 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="8f8269cf-38ac-4207-be57-909e352cb528" Dec 02 14:03:13 crc kubenswrapper[4625]: E1202 14:03:13.075252 4625 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 02 14:03:13 crc kubenswrapper[4625]: E1202 14:03:13.075637 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5hph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(2e108301-d560-49b4-a4b2-a2f45c2fa8fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:03:13 crc kubenswrapper[4625]: E1202 14:03:13.076833 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="2e108301-d560-49b4-a4b2-a2f45c2fa8fd" Dec 02 14:03:13 crc kubenswrapper[4625]: E1202 14:03:13.507078 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="8f8269cf-38ac-4207-be57-909e352cb528" Dec 02 14:03:13 crc kubenswrapper[4625]: E1202 14:03:13.507790 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="2e108301-d560-49b4-a4b2-a2f45c2fa8fd" Dec 02 14:03:14 crc kubenswrapper[4625]: E1202 14:03:14.008041 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 02 14:03:14 crc kubenswrapper[4625]: E1202 14:03:14.008256 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ddsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-thws2_openstack(7fdeae5a-d9fa-49b9-b103-cb48db42df39): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:03:14 crc kubenswrapper[4625]: E1202 14:03:14.009700 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-thws2" podUID="7fdeae5a-d9fa-49b9-b103-cb48db42df39" Dec 02 14:03:14 crc kubenswrapper[4625]: E1202 14:03:14.015051 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 02 14:03:14 crc kubenswrapper[4625]: E1202 14:03:14.015245 4625 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnqq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-shctx_openstack(ef633f35-66ee-4816-931d-110b69a1103c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:03:14 crc kubenswrapper[4625]: E1202 14:03:14.016533 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-shctx" podUID="ef633f35-66ee-4816-931d-110b69a1103c" Dec 02 14:03:14 crc kubenswrapper[4625]: E1202 14:03:14.028346 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 02 14:03:14 crc kubenswrapper[4625]: E1202 14:03:14.028624 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f25j8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-4tlvh_openstack(6f10fd77-5735-4112-9cbf-5365df9a3c1e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:03:14 crc kubenswrapper[4625]: E1202 14:03:14.030118 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-4tlvh" podUID="6f10fd77-5735-4112-9cbf-5365df9a3c1e" Dec 02 14:03:14 crc kubenswrapper[4625]: E1202 14:03:14.088563 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 02 14:03:14 crc kubenswrapper[4625]: E1202 14:03:14.088839 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwbqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-9dtzn_openstack(b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:03:14 crc kubenswrapper[4625]: E1202 14:03:14.091426 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-9dtzn" podUID="b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad" Dec 02 14:03:14 crc kubenswrapper[4625]: E1202 14:03:14.517620 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-thws2" podUID="7fdeae5a-d9fa-49b9-b103-cb48db42df39" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.037952 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-shctx" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.044159 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4tlvh" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.052815 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9dtzn" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.100778 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnqq2\" (UniqueName: \"kubernetes.io/projected/ef633f35-66ee-4816-931d-110b69a1103c-kube-api-access-vnqq2\") pod \"ef633f35-66ee-4816-931d-110b69a1103c\" (UID: \"ef633f35-66ee-4816-931d-110b69a1103c\") " Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.100911 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef633f35-66ee-4816-931d-110b69a1103c-config\") pod \"ef633f35-66ee-4816-931d-110b69a1103c\" (UID: \"ef633f35-66ee-4816-931d-110b69a1103c\") " Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.100996 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f25j8\" (UniqueName: \"kubernetes.io/projected/6f10fd77-5735-4112-9cbf-5365df9a3c1e-kube-api-access-f25j8\") pod \"6f10fd77-5735-4112-9cbf-5365df9a3c1e\" (UID: \"6f10fd77-5735-4112-9cbf-5365df9a3c1e\") " Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.101022 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f10fd77-5735-4112-9cbf-5365df9a3c1e-dns-svc\") pod \"6f10fd77-5735-4112-9cbf-5365df9a3c1e\" (UID: \"6f10fd77-5735-4112-9cbf-5365df9a3c1e\") " Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.101066 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwbqg\" (UniqueName: \"kubernetes.io/projected/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-kube-api-access-gwbqg\") pod \"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad\" (UID: \"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad\") " Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.101120 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-config\") pod \"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad\" (UID: \"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad\") " Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.101164 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f10fd77-5735-4112-9cbf-5365df9a3c1e-config\") pod \"6f10fd77-5735-4112-9cbf-5365df9a3c1e\" (UID: \"6f10fd77-5735-4112-9cbf-5365df9a3c1e\") " Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.101272 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-dns-svc\") pod \"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad\" (UID: \"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad\") " Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.102393 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad" (UID: "b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.111925 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef633f35-66ee-4816-931d-110b69a1103c-kube-api-access-vnqq2" (OuterVolumeSpecName: "kube-api-access-vnqq2") pod "ef633f35-66ee-4816-931d-110b69a1103c" (UID: "ef633f35-66ee-4816-931d-110b69a1103c"). InnerVolumeSpecName "kube-api-access-vnqq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.112474 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef633f35-66ee-4816-931d-110b69a1103c-config" (OuterVolumeSpecName: "config") pod "ef633f35-66ee-4816-931d-110b69a1103c" (UID: "ef633f35-66ee-4816-931d-110b69a1103c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.116149 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-config" (OuterVolumeSpecName: "config") pod "b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad" (UID: "b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.117577 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f10fd77-5735-4112-9cbf-5365df9a3c1e-config" (OuterVolumeSpecName: "config") pod "6f10fd77-5735-4112-9cbf-5365df9a3c1e" (UID: "6f10fd77-5735-4112-9cbf-5365df9a3c1e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.117990 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f10fd77-5735-4112-9cbf-5365df9a3c1e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f10fd77-5735-4112-9cbf-5365df9a3c1e" (UID: "6f10fd77-5735-4112-9cbf-5365df9a3c1e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.120746 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f10fd77-5735-4112-9cbf-5365df9a3c1e-kube-api-access-f25j8" (OuterVolumeSpecName: "kube-api-access-f25j8") pod "6f10fd77-5735-4112-9cbf-5365df9a3c1e" (UID: "6f10fd77-5735-4112-9cbf-5365df9a3c1e"). InnerVolumeSpecName "kube-api-access-f25j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.122943 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-kube-api-access-gwbqg" (OuterVolumeSpecName: "kube-api-access-gwbqg") pod "b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad" (UID: "b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad"). InnerVolumeSpecName "kube-api-access-gwbqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.204186 4625 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f10fd77-5735-4112-9cbf-5365df9a3c1e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.204344 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwbqg\" (UniqueName: \"kubernetes.io/projected/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-kube-api-access-gwbqg\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.204376 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.204402 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f10fd77-5735-4112-9cbf-5365df9a3c1e-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.204415 4625 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.204427 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnqq2\" (UniqueName: \"kubernetes.io/projected/ef633f35-66ee-4816-931d-110b69a1103c-kube-api-access-vnqq2\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.204441 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef633f35-66ee-4816-931d-110b69a1103c-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.204457 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f25j8\" (UniqueName: \"kubernetes.io/projected/6f10fd77-5735-4112-9cbf-5365df9a3c1e-kube-api-access-f25j8\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.492034 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kpd7h"] Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.554884 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9dtzn" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.554876 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9dtzn" event={"ID":"b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad","Type":"ContainerDied","Data":"012752fa7f9c8eafa0b6c65db0d9b354bad15337bd412b0b8d2a8dd0453efd0c"} Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.556915 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-shctx" event={"ID":"ef633f35-66ee-4816-931d-110b69a1103c","Type":"ContainerDied","Data":"d00f28f5b1fcee4613c8d44147f22c66a6963dd77bbbfb619a1533c9ea2233f3"} Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.557043 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-shctx" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.574252 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4tlvh" event={"ID":"6f10fd77-5735-4112-9cbf-5365df9a3c1e","Type":"ContainerDied","Data":"76c41ffc22289ab3a5eb4a02bda2919ca9a65ff29a6749e61f46af5d2d5b13d3"} Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.574404 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4tlvh" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.663096 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9dtzn"] Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.677431 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9dtzn"] Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.791103 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-shctx"] Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.792231 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-shctx"] Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.849567 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4tlvh"] Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.916434 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad" path="/var/lib/kubelet/pods/b9bfdd78-d318-46ac-b5ff-eb0e1c0698ad/volumes" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.918286 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef633f35-66ee-4816-931d-110b69a1103c" path="/var/lib/kubelet/pods/ef633f35-66ee-4816-931d-110b69a1103c/volumes" Dec 02 14:03:16 crc kubenswrapper[4625]: I1202 14:03:16.918723 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4tlvh"] Dec 02 14:03:17 crc kubenswrapper[4625]: I1202 14:03:17.019835 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 02 14:03:17 crc kubenswrapper[4625]: I1202 14:03:17.160442 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ph9m2"] Dec 02 14:03:17 crc kubenswrapper[4625]: W1202 14:03:17.173656 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3295e090_e4c0_4c88_a3aa_9d938e0b541d.slice/crio-f0b3df4f30a8156262b7dc61b2aa90efb2c3ba64e8f0779afcbe1a4d273b0371 WatchSource:0}: Error finding container f0b3df4f30a8156262b7dc61b2aa90efb2c3ba64e8f0779afcbe1a4d273b0371: Status 404 returned error can't find the container with id f0b3df4f30a8156262b7dc61b2aa90efb2c3ba64e8f0779afcbe1a4d273b0371 Dec 02 14:03:17 crc kubenswrapper[4625]: W1202 14:03:17.181819 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bef51af_8aba_4e71_a607_69b0e7facae6.slice/crio-c29bd074f7b2471bf189178de707f35a6a7635afc97e8bbd6f1c748e2c4c4374 WatchSource:0}: Error finding container c29bd074f7b2471bf189178de707f35a6a7635afc97e8bbd6f1c748e2c4c4374: Status 404 returned error can't find the container with id c29bd074f7b2471bf189178de707f35a6a7635afc97e8bbd6f1c748e2c4c4374 Dec 02 14:03:17 crc kubenswrapper[4625]: I1202 14:03:17.589196 4625 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3295e090-e4c0-4c88-a3aa-9d938e0b541d","Type":"ContainerStarted","Data":"f0b3df4f30a8156262b7dc61b2aa90efb2c3ba64e8f0779afcbe1a4d273b0371"} Dec 02 14:03:17 crc kubenswrapper[4625]: I1202 14:03:17.590881 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" event={"ID":"b663e5bc-58df-4629-b207-376cc825d4e4","Type":"ContainerStarted","Data":"defd95c6e89e93d0ae4389165ce4b490b9b5b392255823cfd250ef85e288cb58"} Dec 02 14:03:17 crc kubenswrapper[4625]: I1202 14:03:17.592463 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ph9m2" event={"ID":"2bef51af-8aba-4e71-a607-69b0e7facae6","Type":"ContainerStarted","Data":"c29bd074f7b2471bf189178de707f35a6a7635afc97e8bbd6f1c748e2c4c4374"} Dec 02 14:03:17 crc kubenswrapper[4625]: E1202 14:03:17.725657 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 02 14:03:17 crc kubenswrapper[4625]: E1202 14:03:17.726997 4625 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 02 14:03:17 crc kubenswrapper[4625]: E1202 14:03:17.727174 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ddz9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
kube-state-metrics-0_openstack(42709959-9e14-4b40-8ae8-813bf5e41d5c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 14:03:17 crc kubenswrapper[4625]: E1202 14:03:17.728330 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="42709959-9e14-4b40-8ae8-813bf5e41d5c" Dec 02 14:03:18 crc kubenswrapper[4625]: I1202 14:03:18.602354 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"31013ff5-b8be-40fb-9e34-5eac74bd1849","Type":"ContainerStarted","Data":"75c2e97e909d0f066f2eb74a71f6dbdf683e1892c92bd4eb7899d1e25e9d2302"} Dec 02 14:03:18 crc kubenswrapper[4625]: I1202 14:03:18.604673 4625 generic.go:334] "Generic (PLEG): container finished" podID="b663e5bc-58df-4629-b207-376cc825d4e4" containerID="89f02f4b754a778427fc62db55be00510d30663068c0cc729abbb19bdf190347" exitCode=0 Dec 02 14:03:18 crc kubenswrapper[4625]: I1202 14:03:18.604745 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" event={"ID":"b663e5bc-58df-4629-b207-376cc825d4e4","Type":"ContainerDied","Data":"89f02f4b754a778427fc62db55be00510d30663068c0cc729abbb19bdf190347"} Dec 02 14:03:18 crc kubenswrapper[4625]: I1202 14:03:18.611788 4625 generic.go:334] "Generic (PLEG): container finished" podID="0be922f6-018c-4504-bc6a-f0c8dd53ce5b" containerID="eab872201d9c442de8e0412b2858c71cef3921765ab37325611ff60f377c9768" exitCode=0 Dec 02 14:03:18 crc kubenswrapper[4625]: I1202 14:03:18.612240 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bqzbz" event={"ID":"0be922f6-018c-4504-bc6a-f0c8dd53ce5b","Type":"ContainerDied","Data":"eab872201d9c442de8e0412b2858c71cef3921765ab37325611ff60f377c9768"} Dec 02 14:03:18 crc kubenswrapper[4625]: E1202 14:03:18.617273 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="42709959-9e14-4b40-8ae8-813bf5e41d5c" Dec 02 14:03:18 crc kubenswrapper[4625]: I1202 14:03:18.874992 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f10fd77-5735-4112-9cbf-5365df9a3c1e" path="/var/lib/kubelet/pods/6f10fd77-5735-4112-9cbf-5365df9a3c1e/volumes" Dec 02 14:03:19 crc kubenswrapper[4625]: I1202 14:03:19.626581 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3295e090-e4c0-4c88-a3aa-9d938e0b541d","Type":"ContainerStarted","Data":"b3323da36264e147fcc00bfcd8d4d0d737acff795054866a0d8d0bda326c49f4"} Dec 02 14:03:19 crc kubenswrapper[4625]: I1202 14:03:19.635527 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" event={"ID":"b663e5bc-58df-4629-b207-376cc825d4e4","Type":"ContainerStarted","Data":"076259a7c403efa323cebfbddbd1296f5cf80d2d8f4516c8f7563e54d4c36517"} Dec 02 14:03:19 crc kubenswrapper[4625]: I1202 14:03:19.635667 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" Dec 02 14:03:19 crc kubenswrapper[4625]: I1202 14:03:19.642417 4625 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bqzbz" event={"ID":"0be922f6-018c-4504-bc6a-f0c8dd53ce5b","Type":"ContainerStarted","Data":"acabf22d0b75cc5a3a2896d74e71cc05a6767482b098d4ba265bd0c94f314235"} Dec 02 14:03:19 crc kubenswrapper[4625]: I1202 14:03:19.642480 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bqzbz" event={"ID":"0be922f6-018c-4504-bc6a-f0c8dd53ce5b","Type":"ContainerStarted","Data":"96e2f857c0d1892f19080b04755e902d086ed1cf1f79773d81f1c180ed3bf319"} Dec 02 14:03:19 crc kubenswrapper[4625]: I1202 14:03:19.642671 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:03:19 crc kubenswrapper[4625]: I1202 14:03:19.642710 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:03:19 crc kubenswrapper[4625]: I1202 14:03:19.644013 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a251393-cf48-4d79-8e8d-b46d5e3c664b","Type":"ContainerStarted","Data":"fab8eea7cfc9538032913f923cb15e255e6fdc6b7685be897462dd50245e0a2c"} Dec 02 14:03:19 crc kubenswrapper[4625]: I1202 14:03:19.659620 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" podStartSLOduration=31.396290175 podStartE2EDuration="32.659594231s" podCreationTimestamp="2025-12-02 14:02:47 +0000 UTC" firstStartedPulling="2025-12-02 14:03:16.537204969 +0000 UTC m=+1152.499382044" lastFinishedPulling="2025-12-02 14:03:17.800509025 +0000 UTC m=+1153.762686100" observedRunningTime="2025-12-02 14:03:19.65914886 +0000 UTC m=+1155.621325935" watchObservedRunningTime="2025-12-02 14:03:19.659594231 +0000 UTC m=+1155.621771306" Dec 02 14:03:19 crc kubenswrapper[4625]: I1202 14:03:19.716537 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-bqzbz" podStartSLOduration=21.258485108 podStartE2EDuration="36.716505027s" podCreationTimestamp="2025-12-02 14:02:43 +0000 UTC" firstStartedPulling="2025-12-02 14:03:01.273200934 +0000 UTC m=+1137.235378009" lastFinishedPulling="2025-12-02 14:03:16.731220863 +0000 UTC m=+1152.693397928" observedRunningTime="2025-12-02 14:03:19.71182897 +0000 UTC m=+1155.674006045" watchObservedRunningTime="2025-12-02 14:03:19.716505027 +0000 UTC m=+1155.678682102" Dec 02 14:03:20 crc kubenswrapper[4625]: I1202 14:03:20.674036 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5","Type":"ContainerStarted","Data":"2b13cc239360571cb4d7c4f23f1286d1c18c0922b089c9d5b841f4446361788a"} Dec 02 14:03:22 crc kubenswrapper[4625]: I1202 14:03:22.704413 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ph9m2" event={"ID":"2bef51af-8aba-4e71-a607-69b0e7facae6","Type":"ContainerStarted","Data":"b76fdd475ced423d2f9dcc098aa976377c3dc2f9682feebe10d69c754bf8517a"} Dec 02 14:03:22 crc kubenswrapper[4625]: I1202 14:03:22.706873 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3295e090-e4c0-4c88-a3aa-9d938e0b541d","Type":"ContainerStarted","Data":"f8810fe4096da0a060b2155b5010915999f912003c9dc5b013e6c549903059ba"} Dec 02 14:03:22 crc kubenswrapper[4625]: I1202 14:03:22.710950 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"31013ff5-b8be-40fb-9e34-5eac74bd1849","Type":"ContainerStarted","Data":"4567ae0d0723c87f667fc0198b0dae7b6938971f460493f57a6a7103c09859b4"} Dec 02 14:03:22 crc kubenswrapper[4625]: I1202 14:03:22.749775 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-ph9m2" podStartSLOduration=32.196897458 podStartE2EDuration="36.749750986s" podCreationTimestamp="2025-12-02 14:02:46 +0000 UTC" firstStartedPulling="2025-12-02 14:03:17.184067118 +0000 UTC m=+1153.146244193" lastFinishedPulling="2025-12-02 14:03:21.736920646 +0000 UTC m=+1157.699097721" observedRunningTime="2025-12-02 14:03:22.73210692 +0000 UTC m=+1158.694283995" watchObservedRunningTime="2025-12-02 14:03:22.749750986 +0000 UTC m=+1158.711928061" Dec 02 14:03:22 crc kubenswrapper[4625]: I1202 14:03:22.770177 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=31.149972403 podStartE2EDuration="35.770149907s" podCreationTimestamp="2025-12-02 14:02:47 +0000 UTC" firstStartedPulling="2025-12-02 14:03:17.177032358 +0000 UTC m=+1153.139209433" lastFinishedPulling="2025-12-02 14:03:21.797209862 +0000 UTC m=+1157.759386937" observedRunningTime="2025-12-02 14:03:22.769265662 +0000 UTC m=+1158.731442757" watchObservedRunningTime="2025-12-02 14:03:22.770149907 +0000 UTC m=+1158.732326982" Dec 02 14:03:22 crc kubenswrapper[4625]: I1202 14:03:22.823497 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.764944838 podStartE2EDuration="39.823466295s" podCreationTimestamp="2025-12-02 14:02:43 +0000 UTC" firstStartedPulling="2025-12-02 14:02:57.66362159 +0000 UTC m=+1133.625798665" lastFinishedPulling="2025-12-02 14:03:21.722143047 +0000 UTC m=+1157.684320122" observedRunningTime="2025-12-02 14:03:22.814988136 +0000 UTC m=+1158.777165211" watchObservedRunningTime="2025-12-02 14:03:22.823466295 +0000 UTC m=+1158.785643370" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.274171 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-thws2"] Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.327620 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-b5vwr"] Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.333013 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.338122 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.359962 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-b5vwr"] Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.365951 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-config\") pod \"dnsmasq-dns-86db49b7ff-b5vwr\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.366004 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-b5vwr\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.366123 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-b5vwr\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.366211 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd4xv\" (UniqueName: \"kubernetes.io/projected/9efe65f2-6105-40f4-a716-c393933036a4-kube-api-access-wd4xv\") pod \"dnsmasq-dns-86db49b7ff-b5vwr\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.366243 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-b5vwr\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.469294 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-b5vwr\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.469495 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd4xv\" (UniqueName: \"kubernetes.io/projected/9efe65f2-6105-40f4-a716-c393933036a4-kube-api-access-wd4xv\") pod \"dnsmasq-dns-86db49b7ff-b5vwr\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.469520 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-b5vwr\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") 
" pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.469584 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-config\") pod \"dnsmasq-dns-86db49b7ff-b5vwr\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.469609 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-b5vwr\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.470873 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-b5vwr\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.470946 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-config\") pod \"dnsmasq-dns-86db49b7ff-b5vwr\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.471432 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-b5vwr\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.478785 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-b5vwr\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.532472 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd4xv\" (UniqueName: \"kubernetes.io/projected/9efe65f2-6105-40f4-a716-c393933036a4-kube-api-access-wd4xv\") pod \"dnsmasq-dns-86db49b7ff-b5vwr\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.656496 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.748180 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-thws2" event={"ID":"7fdeae5a-d9fa-49b9-b103-cb48db42df39","Type":"ContainerDied","Data":"8fb9b30a1e80fe9746a92df8ae1b35a93c88f6e72c1a52ff37f1151afd44092d"} Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.748238 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fb9b30a1e80fe9746a92df8ae1b35a93c88f6e72c1a52ff37f1151afd44092d" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.825881 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.869092 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-thws2" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.985493 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fdeae5a-d9fa-49b9-b103-cb48db42df39-config\") pod \"7fdeae5a-d9fa-49b9-b103-cb48db42df39\" (UID: \"7fdeae5a-d9fa-49b9-b103-cb48db42df39\") " Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.986127 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fdeae5a-d9fa-49b9-b103-cb48db42df39-dns-svc\") pod \"7fdeae5a-d9fa-49b9-b103-cb48db42df39\" (UID: \"7fdeae5a-d9fa-49b9-b103-cb48db42df39\") " Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.986155 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ddsp\" (UniqueName: \"kubernetes.io/projected/7fdeae5a-d9fa-49b9-b103-cb48db42df39-kube-api-access-8ddsp\") pod \"7fdeae5a-d9fa-49b9-b103-cb48db42df39\" (UID: \"7fdeae5a-d9fa-49b9-b103-cb48db42df39\") " Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.987727 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fdeae5a-d9fa-49b9-b103-cb48db42df39-config" (OuterVolumeSpecName: "config") pod "7fdeae5a-d9fa-49b9-b103-cb48db42df39" (UID: "7fdeae5a-d9fa-49b9-b103-cb48db42df39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.988203 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fdeae5a-d9fa-49b9-b103-cb48db42df39-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7fdeae5a-d9fa-49b9-b103-cb48db42df39" (UID: "7fdeae5a-d9fa-49b9-b103-cb48db42df39"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:23 crc kubenswrapper[4625]: I1202 14:03:23.995480 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fdeae5a-d9fa-49b9-b103-cb48db42df39-kube-api-access-8ddsp" (OuterVolumeSpecName: "kube-api-access-8ddsp") pod "7fdeae5a-d9fa-49b9-b103-cb48db42df39" (UID: "7fdeae5a-d9fa-49b9-b103-cb48db42df39"). InnerVolumeSpecName "kube-api-access-8ddsp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.089828 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fdeae5a-d9fa-49b9-b103-cb48db42df39-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.089868 4625 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fdeae5a-d9fa-49b9-b103-cb48db42df39-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.089879 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ddsp\" (UniqueName: \"kubernetes.io/projected/7fdeae5a-d9fa-49b9-b103-cb48db42df39-kube-api-access-8ddsp\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.291904 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-b5vwr"] Dec 02 14:03:24 crc kubenswrapper[4625]: W1202 14:03:24.296412 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9efe65f2_6105_40f4_a716_c393933036a4.slice/crio-93f729977324fa1b84b4373e4158915ab40c9d1e3034db481db30e0b8279c91a WatchSource:0}: Error finding container 93f729977324fa1b84b4373e4158915ab40c9d1e3034db481db30e0b8279c91a: Status 404 returned error can't find the container with id 93f729977324fa1b84b4373e4158915ab40c9d1e3034db481db30e0b8279c91a Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.710346 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.757774 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.759370 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5hzbv" event={"ID":"3fe58841-9566-4a48-9e44-6709020a943c","Type":"ContainerStarted","Data":"05b03ed496605dddb634ada02d4504d7021398473a26296fc0ed0c0b4ed41aff"} Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.759595 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-5hzbv" Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.762853 4625 generic.go:334] "Generic (PLEG): container finished" podID="9efe65f2-6105-40f4-a716-c393933036a4" containerID="a537a2df9660fec4ab4d68b91237ca1b6321c30b931bdb07edd757a7430debdd" exitCode=0 Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.763219 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" event={"ID":"9efe65f2-6105-40f4-a716-c393933036a4","Type":"ContainerDied","Data":"a537a2df9660fec4ab4d68b91237ca1b6321c30b931bdb07edd757a7430debdd"} Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.763269 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" event={"ID":"9efe65f2-6105-40f4-a716-c393933036a4","Type":"ContainerStarted","Data":"93f729977324fa1b84b4373e4158915ab40c9d1e3034db481db30e0b8279c91a"} Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.764141 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-thws2" Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.764577 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.826485 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.832945 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5hzbv" podStartSLOduration=3.9360913870000003 podStartE2EDuration="42.832892996s" podCreationTimestamp="2025-12-02 14:02:42 +0000 UTC" firstStartedPulling="2025-12-02 14:02:44.86295708 +0000 UTC m=+1120.825134155" lastFinishedPulling="2025-12-02 14:03:23.759758679 +0000 UTC m=+1159.721935764" observedRunningTime="2025-12-02 14:03:24.819858125 +0000 UTC m=+1160.782035210" watchObservedRunningTime="2025-12-02 14:03:24.832892996 +0000 UTC m=+1160.795070071" Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.901278 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 02 14:03:24 crc kubenswrapper[4625]: I1202 14:03:24.918363 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 02 14:03:25 crc kubenswrapper[4625]: I1202 14:03:25.015624 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-thws2"] Dec 02 14:03:25 crc kubenswrapper[4625]: I1202 14:03:25.075294 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-thws2"] Dec 02 14:03:25 crc kubenswrapper[4625]: I1202 14:03:25.775812 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"266c6414-c5b8-4dd2-939d-2386a0756d9c","Type":"ContainerStarted","Data":"44e4a1d0a0e7923331dd64b20f405fed09a39b87fd9b769962872d45b41c9448"} Dec 02 14:03:25 crc kubenswrapper[4625]: I1202 14:03:25.782535 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" event={"ID":"9efe65f2-6105-40f4-a716-c393933036a4","Type":"ContainerStarted","Data":"620ead4dfd5b4b02d3e7b56e768708855cb2bf508589103e2964120aa402d63c"} Dec 02 14:03:25 crc kubenswrapper[4625]: I1202 14:03:25.830957 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 02 14:03:25 crc kubenswrapper[4625]: I1202 14:03:25.839548 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" podStartSLOduration=2.839524219 podStartE2EDuration="2.839524219s" podCreationTimestamp="2025-12-02 14:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:25.837731671 +0000 UTC m=+1161.799908746" watchObservedRunningTime="2025-12-02 14:03:25.839524219 +0000 UTC m=+1161.801701294" Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.122301 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.124375 4625 util.go:30] "No sandbox for pod can be found. 
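The pod_startup_latency_tracker entry above for ovn-controller-5hzbv is internally consistent with podStartSLOduration = podStartE2EDuration minus the image-pull window: the monotonic m=+ offsets give a 38.897s pull, and 42.833s minus that is the logged 3.936s. A worked check under that assumption (the subtraction rule is inferred from the numbers, not quoted from kubelet source):

```go
package main

import "fmt"

func main() {
	// Monotonic clock offsets (the m=+... values) from the logged entry, in seconds.
	firstStartedPulling := 1120.825134155
	lastFinishedPulling := 1159.721935764
	podStartE2E := 42.832892996 // watchObservedRunningTime - podCreationTimestamp

	pullWindow := lastFinishedPulling - firstStartedPulling // 38.896801609s spent pulling the image
	slo := podStartE2E - pullWindow                         // 3.936091387s, matching the logged
	                                                        // podStartSLOduration=3.9360913870000003
	fmt.Printf("pull window %.9fs, SLO duration %.9fs\n", pullWindow, slo)
}
```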
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.129083 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.129135 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-qjjxl"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.129083 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.129693 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.136855 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8bda2bc-c054-4188-ad43-47b49dab4949-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.136943 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8bda2bc-c054-4188-ad43-47b49dab4949-scripts\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.136971 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8bda2bc-c054-4188-ad43-47b49dab4949-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.136994 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8bda2bc-c054-4188-ad43-47b49dab4949-config\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.137027 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8bda2bc-c054-4188-ad43-47b49dab4949-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.137060 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f8bda2bc-c054-4188-ad43-47b49dab4949-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.137101 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq67k\" (UniqueName: \"kubernetes.io/projected/f8bda2bc-c054-4188-ad43-47b49dab4949-kube-api-access-jq67k\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.173354 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.239367 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8bda2bc-c054-4188-ad43-47b49dab4949-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.239479 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8bda2bc-c054-4188-ad43-47b49dab4949-scripts\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.239518 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8bda2bc-c054-4188-ad43-47b49dab4949-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.239579 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8bda2bc-c054-4188-ad43-47b49dab4949-config\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.239623 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8bda2bc-c054-4188-ad43-47b49dab4949-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.239667 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f8bda2bc-c054-4188-ad43-47b49dab4949-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.239746 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq67k\" (UniqueName: \"kubernetes.io/projected/f8bda2bc-c054-4188-ad43-47b49dab4949-kube-api-access-jq67k\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.240916 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8bda2bc-c054-4188-ad43-47b49dab4949-scripts\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.241511 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8bda2bc-c054-4188-ad43-47b49dab4949-config\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.242044 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f8bda2bc-c054-4188-ad43-47b49dab4949-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.249186 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8bda2bc-c054-4188-ad43-47b49dab4949-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.262360 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8bda2bc-c054-4188-ad43-47b49dab4949-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.263620 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8bda2bc-c054-4188-ad43-47b49dab4949-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.283486 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq67k\" (UniqueName: \"kubernetes.io/projected/f8bda2bc-c054-4188-ad43-47b49dab4949-kube-api-access-jq67k\") pod \"ovn-northd-0\" (UID: \"f8bda2bc-c054-4188-ad43-47b49dab4949\") " pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.442720 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.794649 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8f8269cf-38ac-4207-be57-909e352cb528","Type":"ContainerStarted","Data":"942980db4ece5bb1f35dd2f5a44368117e55acc59f5681ed30e04872c762fb75"}
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.797645 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.834514 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.853819708 podStartE2EDuration="48.834483736s" podCreationTimestamp="2025-12-02 14:02:38 +0000 UTC" firstStartedPulling="2025-12-02 14:02:40.389949676 +0000 UTC m=+1116.352126761" lastFinishedPulling="2025-12-02 14:03:26.370613714 +0000 UTC m=+1162.332790789" observedRunningTime="2025-12-02 14:03:26.825838733 +0000 UTC m=+1162.788015818" watchObservedRunningTime="2025-12-02 14:03:26.834483736 +0000 UTC m=+1162.796660811"
Dec 02 14:03:26 crc kubenswrapper[4625]: I1202 14:03:26.881854 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fdeae5a-d9fa-49b9-b103-cb48db42df39" path="/var/lib/kubelet/pods/7fdeae5a-d9fa-49b9-b103-cb48db42df39/volumes"
Dec 02 14:03:27 crc kubenswrapper[4625]: I1202 14:03:27.028799 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 02 14:03:27 crc kubenswrapper[4625]: I1202 14:03:27.812920 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f8bda2bc-c054-4188-ad43-47b49dab4949","Type":"ContainerStarted","Data":"8b1feb71710493bfd97aec7c330e83df20c551ee851ba7698b7e06be5213fb71"}
Dec 02 14:03:27 crc kubenswrapper[4625]: I1202 14:03:27.819680 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2e108301-d560-49b4-a4b2-a2f45c2fa8fd","Type":"ContainerStarted","Data":"e65db4974d62bab7e0e4093339587319983ceb5dc3d60c8f1832e77d97d9b067"}
event={"ID":"2e108301-d560-49b4-a4b2-a2f45c2fa8fd","Type":"ContainerStarted","Data":"e65db4974d62bab7e0e4093339587319983ceb5dc3d60c8f1832e77d97d9b067"} Dec 02 14:03:27 crc kubenswrapper[4625]: I1202 14:03:27.981664 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" Dec 02 14:03:28 crc kubenswrapper[4625]: I1202 14:03:28.970079 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 02 14:03:29 crc kubenswrapper[4625]: I1202 14:03:29.839807 4625 generic.go:334] "Generic (PLEG): container finished" podID="266c6414-c5b8-4dd2-939d-2386a0756d9c" containerID="44e4a1d0a0e7923331dd64b20f405fed09a39b87fd9b769962872d45b41c9448" exitCode=0 Dec 02 14:03:29 crc kubenswrapper[4625]: I1202 14:03:29.839868 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"266c6414-c5b8-4dd2-939d-2386a0756d9c","Type":"ContainerDied","Data":"44e4a1d0a0e7923331dd64b20f405fed09a39b87fd9b769962872d45b41c9448"} Dec 02 14:03:29 crc kubenswrapper[4625]: I1202 14:03:29.843682 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f8bda2bc-c054-4188-ad43-47b49dab4949","Type":"ContainerStarted","Data":"57334881354c852a88f7e868fc3625365dfb0e4f2bd5216155f62b22b162942e"} Dec 02 14:03:29 crc kubenswrapper[4625]: I1202 14:03:29.843730 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f8bda2bc-c054-4188-ad43-47b49dab4949","Type":"ContainerStarted","Data":"916a636396532e690c83066bec7e846e01f8cf13cf93c486a584906f7fc95ff5"} Dec 02 14:03:29 crc kubenswrapper[4625]: I1202 14:03:29.843989 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 02 14:03:29 crc kubenswrapper[4625]: I1202 14:03:29.901145 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.784677296 podStartE2EDuration="3.901122855s" podCreationTimestamp="2025-12-02 14:03:26 +0000 UTC" firstStartedPulling="2025-12-02 14:03:27.04300818 +0000 UTC m=+1163.005185255" lastFinishedPulling="2025-12-02 14:03:29.159453729 +0000 UTC m=+1165.121630814" observedRunningTime="2025-12-02 14:03:29.895226905 +0000 UTC m=+1165.857404000" watchObservedRunningTime="2025-12-02 14:03:29.901122855 +0000 UTC m=+1165.863299930" Dec 02 14:03:30 crc kubenswrapper[4625]: I1202 14:03:30.897466 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"266c6414-c5b8-4dd2-939d-2386a0756d9c","Type":"ContainerStarted","Data":"06c36c625830f5ba27291104995fe6e8492a6ad48790fba4d245a81d62c194f7"} Dec 02 14:03:30 crc kubenswrapper[4625]: I1202 14:03:30.988903 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.736649558 podStartE2EDuration="54.988872496s" podCreationTimestamp="2025-12-02 14:02:36 +0000 UTC" firstStartedPulling="2025-12-02 14:02:40.253471285 +0000 UTC m=+1116.215648360" lastFinishedPulling="2025-12-02 14:03:25.505694223 +0000 UTC m=+1161.467871298" observedRunningTime="2025-12-02 14:03:30.985574166 +0000 UTC m=+1166.947751241" watchObservedRunningTime="2025-12-02 14:03:30.988872496 +0000 UTC m=+1166.951049571" Dec 02 14:03:31 crc kubenswrapper[4625]: I1202 14:03:31.889266 4625 generic.go:334] "Generic (PLEG): container finished" podID="2e108301-d560-49b4-a4b2-a2f45c2fa8fd" 
containerID="e65db4974d62bab7e0e4093339587319983ceb5dc3d60c8f1832e77d97d9b067" exitCode=0 Dec 02 14:03:31 crc kubenswrapper[4625]: I1202 14:03:31.889349 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2e108301-d560-49b4-a4b2-a2f45c2fa8fd","Type":"ContainerDied","Data":"e65db4974d62bab7e0e4093339587319983ceb5dc3d60c8f1832e77d97d9b067"} Dec 02 14:03:32 crc kubenswrapper[4625]: I1202 14:03:32.898678 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2e108301-d560-49b4-a4b2-a2f45c2fa8fd","Type":"ContainerStarted","Data":"4d1476d61fb14b654742f2fc2d19f0f9168e5f527c8ccbdf7cf24381b148815e"} Dec 02 14:03:32 crc kubenswrapper[4625]: I1202 14:03:32.900583 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42709959-9e14-4b40-8ae8-813bf5e41d5c","Type":"ContainerStarted","Data":"a57f16e865835e2132feea53984f85f5298955487893b60dee476939732657b3"} Dec 02 14:03:32 crc kubenswrapper[4625]: I1202 14:03:32.900961 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 02 14:03:32 crc kubenswrapper[4625]: I1202 14:03:32.921376 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.36309453 podStartE2EDuration="52.921345641s" podCreationTimestamp="2025-12-02 14:02:40 +0000 UTC" firstStartedPulling="2025-12-02 14:02:41.708098122 +0000 UTC m=+1117.670275197" lastFinishedPulling="2025-12-02 14:03:32.266349233 +0000 UTC m=+1168.228526308" observedRunningTime="2025-12-02 14:03:32.921185307 +0000 UTC m=+1168.883362392" watchObservedRunningTime="2025-12-02 14:03:32.921345641 +0000 UTC m=+1168.883522716" Dec 02 14:03:32 crc kubenswrapper[4625]: I1202 14:03:32.962023 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371979.892786 podStartE2EDuration="56.961989547s" podCreationTimestamp="2025-12-02 14:02:36 +0000 UTC" firstStartedPulling="2025-12-02 14:02:39.216565786 +0000 UTC m=+1115.178742861" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:32.95023835 +0000 UTC m=+1168.912415445" watchObservedRunningTime="2025-12-02 14:03:32.961989547 +0000 UTC m=+1168.924166622" Dec 02 14:03:33 crc kubenswrapper[4625]: I1202 14:03:33.660875 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:33 crc kubenswrapper[4625]: I1202 14:03:33.972795 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 02 14:03:34 crc kubenswrapper[4625]: I1202 14:03:34.183761 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kpd7h"] Dec 02 14:03:34 crc kubenswrapper[4625]: I1202 14:03:34.184114 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" podUID="b663e5bc-58df-4629-b207-376cc825d4e4" containerName="dnsmasq-dns" containerID="cri-o://076259a7c403efa323cebfbddbd1296f5cf80d2d8f4516c8f7563e54d4c36517" gracePeriod=10 Dec 02 14:03:35 crc kubenswrapper[4625]: I1202 14:03:35.141986 4625 generic.go:334] "Generic (PLEG): container finished" podID="b663e5bc-58df-4629-b207-376cc825d4e4" containerID="076259a7c403efa323cebfbddbd1296f5cf80d2d8f4516c8f7563e54d4c36517" exitCode=0 Dec 02 14:03:35 crc kubenswrapper[4625]: 
Dec 02 14:03:35 crc kubenswrapper[4625]: I1202 14:03:35.142072 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" event={"ID":"b663e5bc-58df-4629-b207-376cc825d4e4","Type":"ContainerDied","Data":"076259a7c403efa323cebfbddbd1296f5cf80d2d8f4516c8f7563e54d4c36517"}
Dec 02 14:03:37 crc kubenswrapper[4625]: I1202 14:03:37.793060 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Dec 02 14:03:37 crc kubenswrapper[4625]: I1202 14:03:37.793599 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Dec 02 14:03:37 crc kubenswrapper[4625]: I1202 14:03:37.980075 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" podUID="b663e5bc-58df-4629-b207-376cc825d4e4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: connect: connection refused"
Dec 02 14:03:38 crc kubenswrapper[4625]: I1202 14:03:38.728417 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Dec 02 14:03:38 crc kubenswrapper[4625]: I1202 14:03:38.728910 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Dec 02 14:03:38 crc kubenswrapper[4625]: I1202 14:03:38.834357 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h"
Dec 02 14:03:38 crc kubenswrapper[4625]: I1202 14:03:38.839781 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Dec 02 14:03:38 crc kubenswrapper[4625]: I1202 14:03:38.940478 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-ovsdbserver-nb\") pod \"b663e5bc-58df-4629-b207-376cc825d4e4\" (UID: \"b663e5bc-58df-4629-b207-376cc825d4e4\") "
Dec 02 14:03:38 crc kubenswrapper[4625]: I1202 14:03:38.940931 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92bdw\" (UniqueName: \"kubernetes.io/projected/b663e5bc-58df-4629-b207-376cc825d4e4-kube-api-access-92bdw\") pod \"b663e5bc-58df-4629-b207-376cc825d4e4\" (UID: \"b663e5bc-58df-4629-b207-376cc825d4e4\") "
Dec 02 14:03:38 crc kubenswrapper[4625]: I1202 14:03:38.941089 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-config\") pod \"b663e5bc-58df-4629-b207-376cc825d4e4\" (UID: \"b663e5bc-58df-4629-b207-376cc825d4e4\") "
Dec 02 14:03:38 crc kubenswrapper[4625]: I1202 14:03:38.941767 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-dns-svc\") pod \"b663e5bc-58df-4629-b207-376cc825d4e4\" (UID: \"b663e5bc-58df-4629-b207-376cc825d4e4\") "
Dec 02 14:03:38 crc kubenswrapper[4625]: I1202 14:03:38.966284 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b663e5bc-58df-4629-b207-376cc825d4e4-kube-api-access-92bdw" (OuterVolumeSpecName: "kube-api-access-92bdw") pod "b663e5bc-58df-4629-b207-376cc825d4e4" (UID: "b663e5bc-58df-4629-b207-376cc825d4e4"). InnerVolumeSpecName "kube-api-access-92bdw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:03:38 crc kubenswrapper[4625]: I1202 14:03:38.994254 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b663e5bc-58df-4629-b207-376cc825d4e4" (UID: "b663e5bc-58df-4629-b207-376cc825d4e4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 14:03:39 crc kubenswrapper[4625]: I1202 14:03:39.001903 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-config" (OuterVolumeSpecName: "config") pod "b663e5bc-58df-4629-b207-376cc825d4e4" (UID: "b663e5bc-58df-4629-b207-376cc825d4e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 14:03:39 crc kubenswrapper[4625]: I1202 14:03:39.008904 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b663e5bc-58df-4629-b207-376cc825d4e4" (UID: "b663e5bc-58df-4629-b207-376cc825d4e4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 14:03:39 crc kubenswrapper[4625]: I1202 14:03:39.044913 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:39 crc kubenswrapper[4625]: I1202 14:03:39.044962 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92bdw\" (UniqueName: \"kubernetes.io/projected/b663e5bc-58df-4629-b207-376cc825d4e4-kube-api-access-92bdw\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:39 crc kubenswrapper[4625]: I1202 14:03:39.044984 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-config\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:39 crc kubenswrapper[4625]: I1202 14:03:39.044993 4625 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b663e5bc-58df-4629-b207-376cc825d4e4-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 02 14:03:39 crc kubenswrapper[4625]: I1202 14:03:39.181575 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h" event={"ID":"b663e5bc-58df-4629-b207-376cc825d4e4","Type":"ContainerDied","Data":"defd95c6e89e93d0ae4389165ce4b490b9b5b392255823cfd250ef85e288cb58"}
Dec 02 14:03:39 crc kubenswrapper[4625]: I1202 14:03:39.181619 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kpd7h"
Dec 02 14:03:39 crc kubenswrapper[4625]: I1202 14:03:39.181669 4625 scope.go:117] "RemoveContainer" containerID="076259a7c403efa323cebfbddbd1296f5cf80d2d8f4516c8f7563e54d4c36517"
Dec 02 14:03:39 crc kubenswrapper[4625]: I1202 14:03:39.231297 4625 scope.go:117] "RemoveContainer" containerID="89f02f4b754a778427fc62db55be00510d30663068c0cc729abbb19bdf190347"
Dec 02 14:03:39 crc kubenswrapper[4625]: I1202 14:03:39.245200 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kpd7h"]
Dec 02 14:03:39 crc kubenswrapper[4625]: I1202 14:03:39.256078 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kpd7h"]
Dec 02 14:03:39 crc kubenswrapper[4625]: I1202 14:03:39.564216 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Dec 02 14:03:40 crc kubenswrapper[4625]: I1202 14:03:40.813765 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Dec 02 14:03:40 crc kubenswrapper[4625]: I1202 14:03:40.892116 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b663e5bc-58df-4629-b207-376cc825d4e4" path="/var/lib/kubelet/pods/b663e5bc-58df-4629-b207-376cc825d4e4/volumes"
Dec 02 14:03:40 crc kubenswrapper[4625]: I1202 14:03:40.974241 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-72k88"]
Dec 02 14:03:40 crc kubenswrapper[4625]: E1202 14:03:40.974695 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b663e5bc-58df-4629-b207-376cc825d4e4" containerName="dnsmasq-dns"
Dec 02 14:03:40 crc kubenswrapper[4625]: I1202 14:03:40.974723 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="b663e5bc-58df-4629-b207-376cc825d4e4" containerName="dnsmasq-dns"
Dec 02 14:03:40 crc kubenswrapper[4625]: E1202 14:03:40.974756 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b663e5bc-58df-4629-b207-376cc825d4e4" containerName="init"
Dec 02 14:03:40 crc kubenswrapper[4625]: I1202 14:03:40.974763 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="b663e5bc-58df-4629-b207-376cc825d4e4" containerName="init"
Dec 02 14:03:40 crc kubenswrapper[4625]: I1202 14:03:40.974925 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="b663e5bc-58df-4629-b207-376cc825d4e4" containerName="dnsmasq-dns"
Dec 02 14:03:40 crc kubenswrapper[4625]: I1202 14:03:40.975897 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.068229 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-72k88"]
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.100832 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-dns-svc\") pod \"dnsmasq-dns-698758b865-72k88\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.101000 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-72k88\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.101067 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwcjp\" (UniqueName: \"kubernetes.io/projected/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-kube-api-access-jwcjp\") pod \"dnsmasq-dns-698758b865-72k88\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.101158 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-72k88\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.101195 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-config\") pod \"dnsmasq-dns-698758b865-72k88\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.203466 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-dns-svc\") pod \"dnsmasq-dns-698758b865-72k88\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.203554 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-72k88\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.203616 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwcjp\" (UniqueName: \"kubernetes.io/projected/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-kube-api-access-jwcjp\") pod \"dnsmasq-dns-698758b865-72k88\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.203684 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-72k88\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.203715 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-config\") pod \"dnsmasq-dns-698758b865-72k88\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.204791 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-config\") pod \"dnsmasq-dns-698758b865-72k88\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.206337 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-dns-svc\") pod \"dnsmasq-dns-698758b865-72k88\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.206974 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-72k88\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.207959 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-72k88\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.242339 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwcjp\" (UniqueName: \"kubernetes.io/projected/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-kube-api-access-jwcjp\") pod \"dnsmasq-dns-698758b865-72k88\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.318231 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-72k88"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.516892 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Dec 02 14:03:41 crc kubenswrapper[4625]: I1202 14:03:41.862262 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-72k88"]
Dec 02 14:03:41 crc kubenswrapper[4625]: W1202 14:03:41.872517 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5dab0a9_6481_4ef3_9462_c9a26c451ba9.slice/crio-b5dfc98ffdbfe98067a446a7001b82087d87939fdf5a41d22242ebef44b2b068 WatchSource:0}: Error finding container b5dfc98ffdbfe98067a446a7001b82087d87939fdf5a41d22242ebef44b2b068: Status 404 returned error can't find the container with id b5dfc98ffdbfe98067a446a7001b82087d87939fdf5a41d22242ebef44b2b068
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.160025 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.170943 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.176457 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.176459 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.179201 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.180113 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-4bpz2"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.218824 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-72k88" event={"ID":"c5dab0a9-6481-4ef3-9462-c9a26c451ba9","Type":"ContainerStarted","Data":"b5dfc98ffdbfe98067a446a7001b82087d87939fdf5a41d22242ebef44b2b068"}
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.224717 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.324778 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.325640 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfvzb\" (UniqueName: \"kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-kube-api-access-qfvzb\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.325831 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2953913-1ab3-4821-ab7d-8a20cb58ad90-lock\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.326055 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.326250 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2953913-1ab3-4821-ab7d-8a20cb58ad90-cache\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.356415 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.431091 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.431193 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2953913-1ab3-4821-ab7d-8a20cb58ad90-cache\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.431283 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.431334 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfvzb\" (UniqueName: \"kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-kube-api-access-qfvzb\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.431361 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2953913-1ab3-4821-ab7d-8a20cb58ad90-lock\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.431863 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2953913-1ab3-4821-ab7d-8a20cb58ad90-lock\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: E1202 14:03:42.432010 4625 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 02 14:03:42 crc kubenswrapper[4625]: E1202 14:03:42.432036 4625 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 02 14:03:42 crc kubenswrapper[4625]: E1202 14:03:42.432107 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift podName:b2953913-1ab3-4821-ab7d-8a20cb58ad90 nodeName:}" failed. No retries permitted until 2025-12-02 14:03:42.932069129 +0000 UTC m=+1178.894246204 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift") pod "swift-storage-0" (UID: "b2953913-1ab3-4821-ab7d-8a20cb58ad90") : configmap "swift-ring-files" not found
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.432637 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2953913-1ab3-4821-ab7d-8a20cb58ad90-cache\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.433067 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.470209 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.471957 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfvzb\" (UniqueName: \"kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-kube-api-access-qfvzb\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.538997 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.796472 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-9hhrz"]
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.799686 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9hhrz"
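The two nestedpendingoperations.go failures above (here and at 14:03:42.963782 below) show the mount retry delay doubling, 500ms then 1s, while the swift-ring-files ConfigMap does not yet exist; the swift-ring-rebalance-9hhrz job that should produce it is only just being scheduled in parallel. A minimal sketch of a doubling backoff of that shape (the cap and iteration count are assumptions for illustration, not kubelet's actual constants):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond          // first durationBeforeRetry seen in the log
	const cap = 2*time.Minute + 2*time.Second // assumed upper bound on the delay

	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d failed: configmap \"swift-ring-files\" not found; retry in %v\n",
			attempt, delay)
		delay *= 2 // 500ms -> 1s -> 2s -> ..., matching the logged progression
		if delay > cap {
			delay = cap
		}
	}
}
```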
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.802685 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.803076 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.803402 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.817411 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9hhrz"]
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.961730 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b5858ebe-f677-4a48-b729-a8c4023b346d-ring-data-devices\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.962102 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.962273 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ggn8\" (UniqueName: \"kubernetes.io/projected/b5858ebe-f677-4a48-b729-a8c4023b346d-kube-api-access-4ggn8\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.962397 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5858ebe-f677-4a48-b729-a8c4023b346d-scripts\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.962471 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-swiftconf\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.962502 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b5858ebe-f677-4a48-b729-a8c4023b346d-etc-swift\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.962587 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-combined-ca-bundle\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:42 crc kubenswrapper[4625]: I1202 14:03:42.962625 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-dispersionconf\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:42 crc kubenswrapper[4625]: E1202 14:03:42.963707 4625 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 02 14:03:42 crc kubenswrapper[4625]: E1202 14:03:42.963726 4625 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 02 14:03:42 crc kubenswrapper[4625]: E1202 14:03:42.963782 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift podName:b2953913-1ab3-4821-ab7d-8a20cb58ad90 nodeName:}" failed. No retries permitted until 2025-12-02 14:03:43.963764231 +0000 UTC m=+1179.925941306 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift") pod "swift-storage-0" (UID: "b2953913-1ab3-4821-ab7d-8a20cb58ad90") : configmap "swift-ring-files" not found
Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.064716 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ggn8\" (UniqueName: \"kubernetes.io/projected/b5858ebe-f677-4a48-b729-a8c4023b346d-kube-api-access-4ggn8\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.065818 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5858ebe-f677-4a48-b729-a8c4023b346d-scripts\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.065945 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-swiftconf\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.066030 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b5858ebe-f677-4a48-b729-a8c4023b346d-etc-swift\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.066119 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-combined-ca-bundle\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.066207 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-dispersionconf\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.066427 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b5858ebe-f677-4a48-b729-a8c4023b346d-ring-data-devices\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.066840 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5858ebe-f677-4a48-b729-a8c4023b346d-scripts\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.067483 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b5858ebe-f677-4a48-b729-a8c4023b346d-ring-data-devices\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.068445 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b5858ebe-f677-4a48-b729-a8c4023b346d-etc-swift\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.072279 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-dispersionconf\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.073598 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-combined-ca-bundle\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.081006 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-swiftconf\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.104956 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ggn8\" (UniqueName: \"kubernetes.io/projected/b5858ebe-f677-4a48-b729-a8c4023b346d-kube-api-access-4ggn8\") pod \"swift-ring-rebalance-9hhrz\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " pod="openstack/swift-ring-rebalance-9hhrz"
Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.125137 4625 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/swift-ring-rebalance-9hhrz" Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.253194 4625 generic.go:334] "Generic (PLEG): container finished" podID="c5dab0a9-6481-4ef3-9462-c9a26c451ba9" containerID="6b1c012b42008cd93af2bfd48d720b760679aabff26c98770df03c8325c20d77" exitCode=0 Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.253560 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-72k88" event={"ID":"c5dab0a9-6481-4ef3-9462-c9a26c451ba9","Type":"ContainerDied","Data":"6b1c012b42008cd93af2bfd48d720b760679aabff26c98770df03c8325c20d77"} Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.498497 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9hhrz"] Dec 02 14:03:43 crc kubenswrapper[4625]: I1202 14:03:43.990727 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0" Dec 02 14:03:43 crc kubenswrapper[4625]: E1202 14:03:43.991011 4625 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 02 14:03:43 crc kubenswrapper[4625]: E1202 14:03:43.991360 4625 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 02 14:03:43 crc kubenswrapper[4625]: E1202 14:03:43.991452 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift podName:b2953913-1ab3-4821-ab7d-8a20cb58ad90 nodeName:}" failed. No retries permitted until 2025-12-02 14:03:45.991423951 +0000 UTC m=+1181.953601196 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift") pod "swift-storage-0" (UID: "b2953913-1ab3-4821-ab7d-8a20cb58ad90") : configmap "swift-ring-files" not found Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.265239 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-72k88" event={"ID":"c5dab0a9-6481-4ef3-9462-c9a26c451ba9","Type":"ContainerStarted","Data":"42475adc2afe6413daadfca967f6430024b3163474318d831113c1c253391c87"} Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.265741 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-72k88" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.270527 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9hhrz" event={"ID":"b5858ebe-f677-4a48-b729-a8c4023b346d","Type":"ContainerStarted","Data":"34c834fe0528a2889a5d97eb708c1926a0e1f1050a2007803303f4a20a1dcdda"} Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.293983 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-72k88" podStartSLOduration=4.293957171 podStartE2EDuration="4.293957171s" podCreationTimestamp="2025-12-02 14:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:44.288983478 +0000 UTC m=+1180.251160563" watchObservedRunningTime="2025-12-02 14:03:44.293957171 +0000 UTC m=+1180.256134246" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.583688 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-b2wb8"] Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.594862 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-b2wb8" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.619044 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7f64-account-create-update-v2ng7"] Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.620604 4625 util.go:30] "No sandbox for pod can be found. 
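
Annotation: note the durationBeforeRetry progression for this one etc-swift operation — 500ms, then 1s, then 2s, with 4s and 8s steps following later in this excerpt. kubelet's nestedpendingoperations keeps a per-operation exponential backoff so a persistently failing mount does not spin. A standalone sketch that reproduces the observed schedule (the ceiling value is an assumption based on kubelet defaults and never becomes visible in this log):

    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialDelay = 500 * time.Millisecond       // first durationBeforeRetry above
        maxDelay     = 2*time.Minute + 2*time.Second // assumed cap, not observed here
    )

    // exponentialBackoff mimics the per-volume-operation retry delay:
    // it doubles on each consecutive failure until it hits the cap.
    type exponentialBackoff struct {
        last time.Duration
    }

    func (b *exponentialBackoff) next() time.Duration {
        if b.last == 0 {
            b.last = initialDelay
        } else if b.last *= 2; b.last > maxDelay {
            b.last = maxDelay
        }
        return b.last
    }

    func main() {
        var b exponentialBackoff
        for i := 0; i < 5; i++ {
            fmt.Println(b.next()) // 500ms 1s 2s 4s 8s — the schedule in this log
        }
    }
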
Need to start a new one" pod="openstack/glance-7f64-account-create-update-v2ng7" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.623641 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.632054 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-b2wb8"] Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.673054 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7f64-account-create-update-v2ng7"] Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.707706 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7d9fa80-0d4f-4434-aae8-a25fffb891f2-operator-scripts\") pod \"glance-7f64-account-create-update-v2ng7\" (UID: \"a7d9fa80-0d4f-4434-aae8-a25fffb891f2\") " pod="openstack/glance-7f64-account-create-update-v2ng7" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.707807 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlpmg\" (UniqueName: \"kubernetes.io/projected/a7d9fa80-0d4f-4434-aae8-a25fffb891f2-kube-api-access-dlpmg\") pod \"glance-7f64-account-create-update-v2ng7\" (UID: \"a7d9fa80-0d4f-4434-aae8-a25fffb891f2\") " pod="openstack/glance-7f64-account-create-update-v2ng7" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.707882 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxfgc\" (UniqueName: \"kubernetes.io/projected/88666e3c-ac66-4570-a1c1-f4062cae6533-kube-api-access-jxfgc\") pod \"glance-db-create-b2wb8\" (UID: \"88666e3c-ac66-4570-a1c1-f4062cae6533\") " pod="openstack/glance-db-create-b2wb8" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.708016 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88666e3c-ac66-4570-a1c1-f4062cae6533-operator-scripts\") pod \"glance-db-create-b2wb8\" (UID: \"88666e3c-ac66-4570-a1c1-f4062cae6533\") " pod="openstack/glance-db-create-b2wb8" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.810744 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlpmg\" (UniqueName: \"kubernetes.io/projected/a7d9fa80-0d4f-4434-aae8-a25fffb891f2-kube-api-access-dlpmg\") pod \"glance-7f64-account-create-update-v2ng7\" (UID: \"a7d9fa80-0d4f-4434-aae8-a25fffb891f2\") " pod="openstack/glance-7f64-account-create-update-v2ng7" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.810823 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxfgc\" (UniqueName: \"kubernetes.io/projected/88666e3c-ac66-4570-a1c1-f4062cae6533-kube-api-access-jxfgc\") pod \"glance-db-create-b2wb8\" (UID: \"88666e3c-ac66-4570-a1c1-f4062cae6533\") " pod="openstack/glance-db-create-b2wb8" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.810922 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88666e3c-ac66-4570-a1c1-f4062cae6533-operator-scripts\") pod \"glance-db-create-b2wb8\" (UID: \"88666e3c-ac66-4570-a1c1-f4062cae6533\") " pod="openstack/glance-db-create-b2wb8" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.810970 4625 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7d9fa80-0d4f-4434-aae8-a25fffb891f2-operator-scripts\") pod \"glance-7f64-account-create-update-v2ng7\" (UID: \"a7d9fa80-0d4f-4434-aae8-a25fffb891f2\") " pod="openstack/glance-7f64-account-create-update-v2ng7" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.811774 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7d9fa80-0d4f-4434-aae8-a25fffb891f2-operator-scripts\") pod \"glance-7f64-account-create-update-v2ng7\" (UID: \"a7d9fa80-0d4f-4434-aae8-a25fffb891f2\") " pod="openstack/glance-7f64-account-create-update-v2ng7" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.814955 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88666e3c-ac66-4570-a1c1-f4062cae6533-operator-scripts\") pod \"glance-db-create-b2wb8\" (UID: \"88666e3c-ac66-4570-a1c1-f4062cae6533\") " pod="openstack/glance-db-create-b2wb8" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.837447 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlpmg\" (UniqueName: \"kubernetes.io/projected/a7d9fa80-0d4f-4434-aae8-a25fffb891f2-kube-api-access-dlpmg\") pod \"glance-7f64-account-create-update-v2ng7\" (UID: \"a7d9fa80-0d4f-4434-aae8-a25fffb891f2\") " pod="openstack/glance-7f64-account-create-update-v2ng7" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.841176 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxfgc\" (UniqueName: \"kubernetes.io/projected/88666e3c-ac66-4570-a1c1-f4062cae6533-kube-api-access-jxfgc\") pod \"glance-db-create-b2wb8\" (UID: \"88666e3c-ac66-4570-a1c1-f4062cae6533\") " pod="openstack/glance-db-create-b2wb8" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.929632 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-b2wb8" Dec 02 14:03:44 crc kubenswrapper[4625]: I1202 14:03:44.949747 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7f64-account-create-update-v2ng7" Dec 02 14:03:45 crc kubenswrapper[4625]: I1202 14:03:45.525174 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-b2wb8"] Dec 02 14:03:45 crc kubenswrapper[4625]: I1202 14:03:45.654165 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7f64-account-create-update-v2ng7"] Dec 02 14:03:45 crc kubenswrapper[4625]: W1202 14:03:45.672670 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7d9fa80_0d4f_4434_aae8_a25fffb891f2.slice/crio-d6d6cc465c7e25d726fd0d9161fe5717d8c93cf845edaa6369e567ca53129247 WatchSource:0}: Error finding container d6d6cc465c7e25d726fd0d9161fe5717d8c93cf845edaa6369e567ca53129247: Status 404 returned error can't find the container with id d6d6cc465c7e25d726fd0d9161fe5717d8c93cf845edaa6369e567ca53129247 Dec 02 14:03:46 crc kubenswrapper[4625]: I1202 14:03:46.046407 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0" Dec 02 14:03:46 crc kubenswrapper[4625]: E1202 14:03:46.046900 4625 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 02 14:03:46 crc kubenswrapper[4625]: E1202 14:03:46.046920 4625 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 02 14:03:46 crc kubenswrapper[4625]: E1202 14:03:46.046976 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift podName:b2953913-1ab3-4821-ab7d-8a20cb58ad90 nodeName:}" failed. No retries permitted until 2025-12-02 14:03:50.046961566 +0000 UTC m=+1186.009138641 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift") pod "swift-storage-0" (UID: "b2953913-1ab3-4821-ab7d-8a20cb58ad90") : configmap "swift-ring-files" not found Dec 02 14:03:46 crc kubenswrapper[4625]: I1202 14:03:46.338900 4625 generic.go:334] "Generic (PLEG): container finished" podID="88666e3c-ac66-4570-a1c1-f4062cae6533" containerID="70d3f960f2975ca1143d1a70c4d63404ca8378daf82a55f630ffaba9fb28b594" exitCode=0 Dec 02 14:03:46 crc kubenswrapper[4625]: I1202 14:03:46.339005 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-b2wb8" event={"ID":"88666e3c-ac66-4570-a1c1-f4062cae6533","Type":"ContainerDied","Data":"70d3f960f2975ca1143d1a70c4d63404ca8378daf82a55f630ffaba9fb28b594"} Dec 02 14:03:46 crc kubenswrapper[4625]: I1202 14:03:46.339078 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-b2wb8" event={"ID":"88666e3c-ac66-4570-a1c1-f4062cae6533","Type":"ContainerStarted","Data":"998b8ca1123ef8da5737316cbeb3bd710f738fb52a184cd5f76eddf16e86bcbe"} Dec 02 14:03:46 crc kubenswrapper[4625]: I1202 14:03:46.343674 4625 generic.go:334] "Generic (PLEG): container finished" podID="a7d9fa80-0d4f-4434-aae8-a25fffb891f2" containerID="029171181afd062a1b5c55b62e7578673c03215f5a417be7a7c9bd392e1e2031" exitCode=0 Dec 02 14:03:46 crc kubenswrapper[4625]: I1202 14:03:46.343717 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7f64-account-create-update-v2ng7" event={"ID":"a7d9fa80-0d4f-4434-aae8-a25fffb891f2","Type":"ContainerDied","Data":"029171181afd062a1b5c55b62e7578673c03215f5a417be7a7c9bd392e1e2031"} Dec 02 14:03:46 crc kubenswrapper[4625]: I1202 14:03:46.343747 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7f64-account-create-update-v2ng7" event={"ID":"a7d9fa80-0d4f-4434-aae8-a25fffb891f2","Type":"ContainerStarted","Data":"d6d6cc465c7e25d726fd0d9161fe5717d8c93cf845edaa6369e567ca53129247"} Dec 02 14:03:47 crc kubenswrapper[4625]: I1202 14:03:47.989747 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-b2wb8" Dec 02 14:03:47 crc kubenswrapper[4625]: I1202 14:03:47.998801 4625 util.go:48] "No ready sandbox for pod can be found. 
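
Annotation: the "Generic (PLEG): container finished ... exitCode=0" records come from kubelet's pod lifecycle event generator, which relists containers in the runtime and converts state deltas into the ContainerStarted/ContainerDied events the sync loop consumes; the short-lived mariadb database- and account-create job containers above exit 0 almost immediately after starting. A toy relist diff (types and names are illustrative, not the real pkg/kubelet/pleg implementation):

    package main

    import "fmt"

    type state string

    const (
        running state = "running"
        exited  state = "exited"
    )

    // event mirrors the {"ID", "Type", "Data"} payloads in the records above.
    type event struct {
        PodID, Type, ContainerID string
    }

    // relistDiff turns container-state deltas between two runtime snapshots
    // into lifecycle events, the way PLEG feeds kubelet's sync loop.
    // (Container removal is ignored here; the real generator handles it too.)
    func relistDiff(podID string, prev, curr map[string]state) []event {
        var evs []event
        for id, s := range curr {
            old, seen := prev[id]
            switch {
            case !seen && s == running:
                evs = append(evs, event{podID, "ContainerStarted", id})
            case seen && old == running && s == exited:
                evs = append(evs, event{podID, "ContainerDied", id})
            }
        }
        return evs
    }

    func main() {
        prev := map[string]state{"70d3f960f297": running}
        curr := map[string]state{"70d3f960f297": exited}
        fmt.Println(relistDiff("88666e3c-ac66-4570-a1c1-f4062cae6533", prev, curr))
    }
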
Need to start a new one" pod="openstack/glance-7f64-account-create-update-v2ng7" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.011554 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlpmg\" (UniqueName: \"kubernetes.io/projected/a7d9fa80-0d4f-4434-aae8-a25fffb891f2-kube-api-access-dlpmg\") pod \"a7d9fa80-0d4f-4434-aae8-a25fffb891f2\" (UID: \"a7d9fa80-0d4f-4434-aae8-a25fffb891f2\") " Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.012073 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxfgc\" (UniqueName: \"kubernetes.io/projected/88666e3c-ac66-4570-a1c1-f4062cae6533-kube-api-access-jxfgc\") pod \"88666e3c-ac66-4570-a1c1-f4062cae6533\" (UID: \"88666e3c-ac66-4570-a1c1-f4062cae6533\") " Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.012448 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7d9fa80-0d4f-4434-aae8-a25fffb891f2-operator-scripts\") pod \"a7d9fa80-0d4f-4434-aae8-a25fffb891f2\" (UID: \"a7d9fa80-0d4f-4434-aae8-a25fffb891f2\") " Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.012920 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88666e3c-ac66-4570-a1c1-f4062cae6533-operator-scripts\") pod \"88666e3c-ac66-4570-a1c1-f4062cae6533\" (UID: \"88666e3c-ac66-4570-a1c1-f4062cae6533\") " Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.014865 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88666e3c-ac66-4570-a1c1-f4062cae6533-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88666e3c-ac66-4570-a1c1-f4062cae6533" (UID: "88666e3c-ac66-4570-a1c1-f4062cae6533"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.015751 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7d9fa80-0d4f-4434-aae8-a25fffb891f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7d9fa80-0d4f-4434-aae8-a25fffb891f2" (UID: "a7d9fa80-0d4f-4434-aae8-a25fffb891f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.043917 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7d9fa80-0d4f-4434-aae8-a25fffb891f2-kube-api-access-dlpmg" (OuterVolumeSpecName: "kube-api-access-dlpmg") pod "a7d9fa80-0d4f-4434-aae8-a25fffb891f2" (UID: "a7d9fa80-0d4f-4434-aae8-a25fffb891f2"). InnerVolumeSpecName "kube-api-access-dlpmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.044039 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88666e3c-ac66-4570-a1c1-f4062cae6533-kube-api-access-jxfgc" (OuterVolumeSpecName: "kube-api-access-jxfgc") pod "88666e3c-ac66-4570-a1c1-f4062cae6533" (UID: "88666e3c-ac66-4570-a1c1-f4062cae6533"). InnerVolumeSpecName "kube-api-access-jxfgc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.116069 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxfgc\" (UniqueName: \"kubernetes.io/projected/88666e3c-ac66-4570-a1c1-f4062cae6533-kube-api-access-jxfgc\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.116467 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7d9fa80-0d4f-4434-aae8-a25fffb891f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.116549 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88666e3c-ac66-4570-a1c1-f4062cae6533-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.116699 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlpmg\" (UniqueName: \"kubernetes.io/projected/a7d9fa80-0d4f-4434-aae8-a25fffb891f2-kube-api-access-dlpmg\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.334359 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-zlpv9"] Dec 02 14:03:48 crc kubenswrapper[4625]: E1202 14:03:48.343685 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88666e3c-ac66-4570-a1c1-f4062cae6533" containerName="mariadb-database-create" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.344174 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="88666e3c-ac66-4570-a1c1-f4062cae6533" containerName="mariadb-database-create" Dec 02 14:03:48 crc kubenswrapper[4625]: E1202 14:03:48.344439 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7d9fa80-0d4f-4434-aae8-a25fffb891f2" containerName="mariadb-account-create-update" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.344556 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7d9fa80-0d4f-4434-aae8-a25fffb891f2" containerName="mariadb-account-create-update" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.344992 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7d9fa80-0d4f-4434-aae8-a25fffb891f2" containerName="mariadb-account-create-update" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.345183 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="88666e3c-ac66-4570-a1c1-f4062cae6533" containerName="mariadb-database-create" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.345831 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zlpv9"] Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.346025 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zlpv9" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.385979 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7f64-account-create-update-v2ng7" event={"ID":"a7d9fa80-0d4f-4434-aae8-a25fffb891f2","Type":"ContainerDied","Data":"d6d6cc465c7e25d726fd0d9161fe5717d8c93cf845edaa6369e567ca53129247"} Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.386036 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6d6cc465c7e25d726fd0d9161fe5717d8c93cf845edaa6369e567ca53129247" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.386003 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7f64-account-create-update-v2ng7" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.388455 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-b2wb8" event={"ID":"88666e3c-ac66-4570-a1c1-f4062cae6533","Type":"ContainerDied","Data":"998b8ca1123ef8da5737316cbeb3bd710f738fb52a184cd5f76eddf16e86bcbe"} Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.388533 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="998b8ca1123ef8da5737316cbeb3bd710f738fb52a184cd5f76eddf16e86bcbe" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.388563 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-b2wb8" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.421257 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjnzk\" (UniqueName: \"kubernetes.io/projected/b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c-kube-api-access-qjnzk\") pod \"keystone-db-create-zlpv9\" (UID: \"b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c\") " pod="openstack/keystone-db-create-zlpv9" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.421459 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c-operator-scripts\") pod \"keystone-db-create-zlpv9\" (UID: \"b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c\") " pod="openstack/keystone-db-create-zlpv9" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.470385 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e870-account-create-update-jgjb6"] Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.471891 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e870-account-create-update-jgjb6" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.481876 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.489214 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e870-account-create-update-jgjb6"] Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.523813 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjnzk\" (UniqueName: \"kubernetes.io/projected/b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c-kube-api-access-qjnzk\") pod \"keystone-db-create-zlpv9\" (UID: \"b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c\") " pod="openstack/keystone-db-create-zlpv9" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.523965 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c-operator-scripts\") pod \"keystone-db-create-zlpv9\" (UID: \"b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c\") " pod="openstack/keystone-db-create-zlpv9" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.524022 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxkjh\" (UniqueName: \"kubernetes.io/projected/5258050d-85d9-4593-8d3b-64772e76fcf5-kube-api-access-nxkjh\") pod \"keystone-e870-account-create-update-jgjb6\" (UID: \"5258050d-85d9-4593-8d3b-64772e76fcf5\") " pod="openstack/keystone-e870-account-create-update-jgjb6" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.524073 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5258050d-85d9-4593-8d3b-64772e76fcf5-operator-scripts\") pod \"keystone-e870-account-create-update-jgjb6\" (UID: \"5258050d-85d9-4593-8d3b-64772e76fcf5\") " pod="openstack/keystone-e870-account-create-update-jgjb6" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.525783 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c-operator-scripts\") pod \"keystone-db-create-zlpv9\" (UID: \"b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c\") " pod="openstack/keystone-db-create-zlpv9" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.546761 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjnzk\" (UniqueName: \"kubernetes.io/projected/b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c-kube-api-access-qjnzk\") pod \"keystone-db-create-zlpv9\" (UID: \"b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c\") " pod="openstack/keystone-db-create-zlpv9" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.625108 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxkjh\" (UniqueName: \"kubernetes.io/projected/5258050d-85d9-4593-8d3b-64772e76fcf5-kube-api-access-nxkjh\") pod \"keystone-e870-account-create-update-jgjb6\" (UID: \"5258050d-85d9-4593-8d3b-64772e76fcf5\") " pod="openstack/keystone-e870-account-create-update-jgjb6" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.625177 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5258050d-85d9-4593-8d3b-64772e76fcf5-operator-scripts\") pod 
\"keystone-e870-account-create-update-jgjb6\" (UID: \"5258050d-85d9-4593-8d3b-64772e76fcf5\") " pod="openstack/keystone-e870-account-create-update-jgjb6" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.626062 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5258050d-85d9-4593-8d3b-64772e76fcf5-operator-scripts\") pod \"keystone-e870-account-create-update-jgjb6\" (UID: \"5258050d-85d9-4593-8d3b-64772e76fcf5\") " pod="openstack/keystone-e870-account-create-update-jgjb6" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.644691 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxkjh\" (UniqueName: \"kubernetes.io/projected/5258050d-85d9-4593-8d3b-64772e76fcf5-kube-api-access-nxkjh\") pod \"keystone-e870-account-create-update-jgjb6\" (UID: \"5258050d-85d9-4593-8d3b-64772e76fcf5\") " pod="openstack/keystone-e870-account-create-update-jgjb6" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.682080 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zlpv9" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.795412 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e870-account-create-update-jgjb6" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.919788 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8688-account-create-update-c46p5"] Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.922084 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8688-account-create-update-c46p5" Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.931502 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8688-account-create-update-c46p5"] Dec 02 14:03:48 crc kubenswrapper[4625]: I1202 14:03:48.942653 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.011910 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-zdssn"] Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.013519 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zdssn" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.029566 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zdssn"] Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.033179 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9xj4\" (UniqueName: \"kubernetes.io/projected/bccdd3a9-0e57-475b-a410-d1f6580c1ff5-kube-api-access-z9xj4\") pod \"placement-8688-account-create-update-c46p5\" (UID: \"bccdd3a9-0e57-475b-a410-d1f6580c1ff5\") " pod="openstack/placement-8688-account-create-update-c46p5" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.033385 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bccdd3a9-0e57-475b-a410-d1f6580c1ff5-operator-scripts\") pod \"placement-8688-account-create-update-c46p5\" (UID: \"bccdd3a9-0e57-475b-a410-d1f6580c1ff5\") " pod="openstack/placement-8688-account-create-update-c46p5" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.135964 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3fd0764-6795-48ad-b740-98ee968ba808-operator-scripts\") pod \"placement-db-create-zdssn\" (UID: \"f3fd0764-6795-48ad-b740-98ee968ba808\") " pod="openstack/placement-db-create-zdssn" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.136060 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bccdd3a9-0e57-475b-a410-d1f6580c1ff5-operator-scripts\") pod \"placement-8688-account-create-update-c46p5\" (UID: \"bccdd3a9-0e57-475b-a410-d1f6580c1ff5\") " pod="openstack/placement-8688-account-create-update-c46p5" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.136121 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52vqj\" (UniqueName: \"kubernetes.io/projected/f3fd0764-6795-48ad-b740-98ee968ba808-kube-api-access-52vqj\") pod \"placement-db-create-zdssn\" (UID: \"f3fd0764-6795-48ad-b740-98ee968ba808\") " pod="openstack/placement-db-create-zdssn" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.136157 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9xj4\" (UniqueName: \"kubernetes.io/projected/bccdd3a9-0e57-475b-a410-d1f6580c1ff5-kube-api-access-z9xj4\") pod \"placement-8688-account-create-update-c46p5\" (UID: \"bccdd3a9-0e57-475b-a410-d1f6580c1ff5\") " pod="openstack/placement-8688-account-create-update-c46p5" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.137438 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bccdd3a9-0e57-475b-a410-d1f6580c1ff5-operator-scripts\") pod \"placement-8688-account-create-update-c46p5\" (UID: \"bccdd3a9-0e57-475b-a410-d1f6580c1ff5\") " pod="openstack/placement-8688-account-create-update-c46p5" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.163080 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9xj4\" (UniqueName: \"kubernetes.io/projected/bccdd3a9-0e57-475b-a410-d1f6580c1ff5-kube-api-access-z9xj4\") pod \"placement-8688-account-create-update-c46p5\" (UID: 
\"bccdd3a9-0e57-475b-a410-d1f6580c1ff5\") " pod="openstack/placement-8688-account-create-update-c46p5" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.237958 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3fd0764-6795-48ad-b740-98ee968ba808-operator-scripts\") pod \"placement-db-create-zdssn\" (UID: \"f3fd0764-6795-48ad-b740-98ee968ba808\") " pod="openstack/placement-db-create-zdssn" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.238080 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52vqj\" (UniqueName: \"kubernetes.io/projected/f3fd0764-6795-48ad-b740-98ee968ba808-kube-api-access-52vqj\") pod \"placement-db-create-zdssn\" (UID: \"f3fd0764-6795-48ad-b740-98ee968ba808\") " pod="openstack/placement-db-create-zdssn" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.238954 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3fd0764-6795-48ad-b740-98ee968ba808-operator-scripts\") pod \"placement-db-create-zdssn\" (UID: \"f3fd0764-6795-48ad-b740-98ee968ba808\") " pod="openstack/placement-db-create-zdssn" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.262153 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52vqj\" (UniqueName: \"kubernetes.io/projected/f3fd0764-6795-48ad-b740-98ee968ba808-kube-api-access-52vqj\") pod \"placement-db-create-zdssn\" (UID: \"f3fd0764-6795-48ad-b740-98ee968ba808\") " pod="openstack/placement-db-create-zdssn" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.272082 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.272176 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.292231 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8688-account-create-update-c46p5" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.335891 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zdssn" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.798956 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-6xqcm"] Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.800480 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6xqcm" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.806093 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d2cmn" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.808561 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.810874 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6xqcm"] Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.851604 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-db-sync-config-data\") pod \"glance-db-sync-6xqcm\" (UID: \"314f653d-9ec6-47e4-af2a-aadc2440d332\") " pod="openstack/glance-db-sync-6xqcm" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.852064 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-combined-ca-bundle\") pod \"glance-db-sync-6xqcm\" (UID: \"314f653d-9ec6-47e4-af2a-aadc2440d332\") " pod="openstack/glance-db-sync-6xqcm" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.852239 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-config-data\") pod \"glance-db-sync-6xqcm\" (UID: \"314f653d-9ec6-47e4-af2a-aadc2440d332\") " pod="openstack/glance-db-sync-6xqcm" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.852503 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cps2\" (UniqueName: \"kubernetes.io/projected/314f653d-9ec6-47e4-af2a-aadc2440d332-kube-api-access-5cps2\") pod \"glance-db-sync-6xqcm\" (UID: \"314f653d-9ec6-47e4-af2a-aadc2440d332\") " pod="openstack/glance-db-sync-6xqcm" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.954119 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-db-sync-config-data\") pod \"glance-db-sync-6xqcm\" (UID: \"314f653d-9ec6-47e4-af2a-aadc2440d332\") " pod="openstack/glance-db-sync-6xqcm" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.954175 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-combined-ca-bundle\") pod \"glance-db-sync-6xqcm\" (UID: \"314f653d-9ec6-47e4-af2a-aadc2440d332\") " pod="openstack/glance-db-sync-6xqcm" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.954207 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-config-data\") pod \"glance-db-sync-6xqcm\" (UID: \"314f653d-9ec6-47e4-af2a-aadc2440d332\") " pod="openstack/glance-db-sync-6xqcm" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.954290 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cps2\" (UniqueName: \"kubernetes.io/projected/314f653d-9ec6-47e4-af2a-aadc2440d332-kube-api-access-5cps2\") pod 
\"glance-db-sync-6xqcm\" (UID: \"314f653d-9ec6-47e4-af2a-aadc2440d332\") " pod="openstack/glance-db-sync-6xqcm" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.961050 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-config-data\") pod \"glance-db-sync-6xqcm\" (UID: \"314f653d-9ec6-47e4-af2a-aadc2440d332\") " pod="openstack/glance-db-sync-6xqcm" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.961220 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-combined-ca-bundle\") pod \"glance-db-sync-6xqcm\" (UID: \"314f653d-9ec6-47e4-af2a-aadc2440d332\") " pod="openstack/glance-db-sync-6xqcm" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.964584 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-db-sync-config-data\") pod \"glance-db-sync-6xqcm\" (UID: \"314f653d-9ec6-47e4-af2a-aadc2440d332\") " pod="openstack/glance-db-sync-6xqcm" Dec 02 14:03:49 crc kubenswrapper[4625]: I1202 14:03:49.985472 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cps2\" (UniqueName: \"kubernetes.io/projected/314f653d-9ec6-47e4-af2a-aadc2440d332-kube-api-access-5cps2\") pod \"glance-db-sync-6xqcm\" (UID: \"314f653d-9ec6-47e4-af2a-aadc2440d332\") " pod="openstack/glance-db-sync-6xqcm" Dec 02 14:03:50 crc kubenswrapper[4625]: I1202 14:03:50.055497 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0" Dec 02 14:03:50 crc kubenswrapper[4625]: E1202 14:03:50.055776 4625 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 02 14:03:50 crc kubenswrapper[4625]: E1202 14:03:50.055833 4625 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 02 14:03:50 crc kubenswrapper[4625]: E1202 14:03:50.055955 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift podName:b2953913-1ab3-4821-ab7d-8a20cb58ad90 nodeName:}" failed. No retries permitted until 2025-12-02 14:03:58.055918152 +0000 UTC m=+1194.018095227 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift") pod "swift-storage-0" (UID: "b2953913-1ab3-4821-ab7d-8a20cb58ad90") : configmap "swift-ring-files" not found Dec 02 14:03:50 crc kubenswrapper[4625]: I1202 14:03:50.128156 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6xqcm" Dec 02 14:03:51 crc kubenswrapper[4625]: I1202 14:03:51.337574 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-72k88" Dec 02 14:03:51 crc kubenswrapper[4625]: I1202 14:03:51.482933 4625 generic.go:334] "Generic (PLEG): container finished" podID="5a251393-cf48-4d79-8e8d-b46d5e3c664b" containerID="fab8eea7cfc9538032913f923cb15e255e6fdc6b7685be897462dd50245e0a2c" exitCode=0 Dec 02 14:03:51 crc kubenswrapper[4625]: I1202 14:03:51.483450 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a251393-cf48-4d79-8e8d-b46d5e3c664b","Type":"ContainerDied","Data":"fab8eea7cfc9538032913f923cb15e255e6fdc6b7685be897462dd50245e0a2c"} Dec 02 14:03:51 crc kubenswrapper[4625]: I1202 14:03:51.555717 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-b5vwr"] Dec 02 14:03:51 crc kubenswrapper[4625]: I1202 14:03:51.556635 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" podUID="9efe65f2-6105-40f4-a716-c393933036a4" containerName="dnsmasq-dns" containerID="cri-o://620ead4dfd5b4b02d3e7b56e768708855cb2bf508589103e2964120aa402d63c" gracePeriod=10 Dec 02 14:03:52 crc kubenswrapper[4625]: I1202 14:03:52.092522 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zlpv9"] Dec 02 14:03:52 crc kubenswrapper[4625]: I1202 14:03:52.371202 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8688-account-create-update-c46p5"] Dec 02 14:03:52 crc kubenswrapper[4625]: I1202 14:03:52.420460 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zdssn"] Dec 02 14:03:52 crc kubenswrapper[4625]: I1202 14:03:52.471740 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e870-account-create-update-jgjb6"] Dec 02 14:03:52 crc kubenswrapper[4625]: I1202 14:03:52.513300 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zlpv9" event={"ID":"b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c","Type":"ContainerStarted","Data":"f53611cf115b159898d3e41c2b657e599411dc0c62f30aef38b861bc2adf337d"} Dec 02 14:03:52 crc kubenswrapper[4625]: I1202 14:03:52.516651 4625 generic.go:334] "Generic (PLEG): container finished" podID="1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" containerID="2b13cc239360571cb4d7c4f23f1286d1c18c0922b089c9d5b841f4446361788a" exitCode=0 Dec 02 14:03:52 crc kubenswrapper[4625]: I1202 14:03:52.516721 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5","Type":"ContainerDied","Data":"2b13cc239360571cb4d7c4f23f1286d1c18c0922b089c9d5b841f4446361788a"} Dec 02 14:03:52 crc kubenswrapper[4625]: I1202 14:03:52.538492 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9hhrz" event={"ID":"b5858ebe-f677-4a48-b729-a8c4023b346d","Type":"ContainerStarted","Data":"644b48dea754464da4171edc76e73ee04824f237305990429ba9ee9798dddfaa"} Dec 02 14:03:52 crc kubenswrapper[4625]: I1202 14:03:52.551845 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8688-account-create-update-c46p5" event={"ID":"bccdd3a9-0e57-475b-a410-d1f6580c1ff5","Type":"ContainerStarted","Data":"6ef174e4f68d6873bab06bf542456624af1147e1ba2337d0dd0ebcdf7df42d13"} Dec 02 14:03:52 crc 
kubenswrapper[4625]: I1202 14:03:52.610706 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a251393-cf48-4d79-8e8d-b46d5e3c664b","Type":"ContainerStarted","Data":"47c943a6cbaa463a5ed3297531df1fac01775ca05bc4922c59f86d9b19daf748"} Dec 02 14:03:52 crc kubenswrapper[4625]: I1202 14:03:52.611642 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:03:52 crc kubenswrapper[4625]: I1202 14:03:52.638757 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zdssn" event={"ID":"f3fd0764-6795-48ad-b740-98ee968ba808","Type":"ContainerStarted","Data":"5678f64075074cd60a29e5114f5d6ebe8b621a6e17a68aebdc5fb724396b91f1"} Dec 02 14:03:52 crc kubenswrapper[4625]: I1202 14:03:52.656164 4625 generic.go:334] "Generic (PLEG): container finished" podID="9efe65f2-6105-40f4-a716-c393933036a4" containerID="620ead4dfd5b4b02d3e7b56e768708855cb2bf508589103e2964120aa402d63c" exitCode=0 Dec 02 14:03:52 crc kubenswrapper[4625]: I1202 14:03:52.656484 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" event={"ID":"9efe65f2-6105-40f4-a716-c393933036a4","Type":"ContainerDied","Data":"620ead4dfd5b4b02d3e7b56e768708855cb2bf508589103e2964120aa402d63c"} Dec 02 14:03:52 crc kubenswrapper[4625]: I1202 14:03:52.794950 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.513626451 podStartE2EDuration="1m18.794924134s" podCreationTimestamp="2025-12-02 14:02:34 +0000 UTC" firstStartedPulling="2025-12-02 14:02:37.555223464 +0000 UTC m=+1113.517400539" lastFinishedPulling="2025-12-02 14:03:17.836521147 +0000 UTC m=+1153.798698222" observedRunningTime="2025-12-02 14:03:52.696021365 +0000 UTC m=+1188.658198460" watchObservedRunningTime="2025-12-02 14:03:52.794924134 +0000 UTC m=+1188.757101209" Dec 02 14:03:52 crc kubenswrapper[4625]: I1202 14:03:52.795117 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-9hhrz" podStartSLOduration=3.307629844 podStartE2EDuration="10.795112819s" podCreationTimestamp="2025-12-02 14:03:42 +0000 UTC" firstStartedPulling="2025-12-02 14:03:43.518016011 +0000 UTC m=+1179.480193076" lastFinishedPulling="2025-12-02 14:03:51.005498976 +0000 UTC m=+1186.967676051" observedRunningTime="2025-12-02 14:03:52.791972694 +0000 UTC m=+1188.754149769" watchObservedRunningTime="2025-12-02 14:03:52.795112819 +0000 UTC m=+1188.757289894" Dec 02 14:03:52 crc kubenswrapper[4625]: I1202 14:03:52.851337 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6xqcm"] Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.429743 4625 util.go:48] "No ready sandbox for pod can be found. 
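
Annotation: the pod_startup_latency_tracker records encode a simple identity worth knowing when reading them — podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. For rabbitmq-cell1-server-0 above: 1m18.794924134s minus 40.281297683s of pulling equals 38.513626451s, exactly the logged SLO value. The same arithmetic, reproduced from the timestamps in the record:

    package main

    import (
        "fmt"
        "time"
    )

    // layout matches Go's time.Time.String() output used in these records.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps copied from the rabbitmq-cell1-server-0 record above.
        created := mustParse("2025-12-02 14:02:34 +0000 UTC")
        firstPull := mustParse("2025-12-02 14:02:37.555223464 +0000 UTC")
        lastPull := mustParse("2025-12-02 14:03:17.836521147 +0000 UTC")
        observed := mustParse("2025-12-02 14:03:52.794924134 +0000 UTC")

        e2e := observed.Sub(created)         // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // E2E minus image-pull time

        fmt.Println(e2e) // 1m18.794924134s
        fmt.Println(slo) // 38.513626451s
    }

Pods that pulled no image carry the zero-value pull timestamps (0001-01-01), and their SLO and E2E durations coincide, as with dnsmasq-dns-698758b865-72k88 earlier.
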
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.570440 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-ovsdbserver-nb\") pod \"9efe65f2-6105-40f4-a716-c393933036a4\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.570613 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-ovsdbserver-sb\") pod \"9efe65f2-6105-40f4-a716-c393933036a4\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.570669 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-dns-svc\") pod \"9efe65f2-6105-40f4-a716-c393933036a4\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.570754 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-config\") pod \"9efe65f2-6105-40f4-a716-c393933036a4\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.570846 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd4xv\" (UniqueName: \"kubernetes.io/projected/9efe65f2-6105-40f4-a716-c393933036a4-kube-api-access-wd4xv\") pod \"9efe65f2-6105-40f4-a716-c393933036a4\" (UID: \"9efe65f2-6105-40f4-a716-c393933036a4\") " Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.632784 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9efe65f2-6105-40f4-a716-c393933036a4-kube-api-access-wd4xv" (OuterVolumeSpecName: "kube-api-access-wd4xv") pod "9efe65f2-6105-40f4-a716-c393933036a4" (UID: "9efe65f2-6105-40f4-a716-c393933036a4"). InnerVolumeSpecName "kube-api-access-wd4xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.678405 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd4xv\" (UniqueName: \"kubernetes.io/projected/9efe65f2-6105-40f4-a716-c393933036a4-kube-api-access-wd4xv\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.715340 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.723034 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.723429 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-b5vwr" event={"ID":"9efe65f2-6105-40f4-a716-c393933036a4","Type":"ContainerDied","Data":"93f729977324fa1b84b4373e4158915ab40c9d1e3034db481db30e0b8279c91a"} Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.723479 4625 scope.go:117] "RemoveContainer" containerID="620ead4dfd5b4b02d3e7b56e768708855cb2bf508589103e2964120aa402d63c" Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.735887 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9efe65f2-6105-40f4-a716-c393933036a4" (UID: "9efe65f2-6105-40f4-a716-c393933036a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.744887 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9efe65f2-6105-40f4-a716-c393933036a4" (UID: "9efe65f2-6105-40f4-a716-c393933036a4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.749559 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5","Type":"ContainerStarted","Data":"391c3655b26c148b6f2b79ad817679e5e3bccbb9beaed46211e837d28f4c8907"} Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.750709 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.758613 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6xqcm" event={"ID":"314f653d-9ec6-47e4-af2a-aadc2440d332","Type":"ContainerStarted","Data":"5e1b5e73f961c74fd875ce0537b62064b3ce336406fb6901fe3325059e9480c5"} Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.766569 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8688-account-create-update-c46p5" event={"ID":"bccdd3a9-0e57-475b-a410-d1f6580c1ff5","Type":"ContainerStarted","Data":"8fbbbc2c13bcd4882f58f5404c9d83abfecbfd7746bc898de7e774fa5448e242"} Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.779000 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zdssn" event={"ID":"f3fd0764-6795-48ad-b740-98ee968ba808","Type":"ContainerStarted","Data":"1223524f711e9be861c19e77223ac5505f483b6dc7b024e4f119f254a905a4b4"} Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.781043 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.781079 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.787938 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-e870-account-create-update-jgjb6" event={"ID":"5258050d-85d9-4593-8d3b-64772e76fcf5","Type":"ContainerStarted","Data":"1525f1e12f03bb6ba7ade061fbf58170977326521f898bd37d8d1800ceda939f"} Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.803615 4625 scope.go:117] "RemoveContainer" containerID="a537a2df9660fec4ab4d68b91237ca1b6321c30b931bdb07edd757a7430debdd" Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.803910 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bqzbz" Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.845129 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-config" (OuterVolumeSpecName: "config") pod "9efe65f2-6105-40f4-a716-c393933036a4" (UID: "9efe65f2-6105-40f4-a716-c393933036a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.885376 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.906577 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.788492634 podStartE2EDuration="1m19.906547238s" podCreationTimestamp="2025-12-02 14:02:34 +0000 UTC" firstStartedPulling="2025-12-02 14:02:36.605097825 +0000 UTC m=+1112.567274900" lastFinishedPulling="2025-12-02 14:03:17.723152429 +0000 UTC m=+1153.685329504" observedRunningTime="2025-12-02 14:03:53.897059512 +0000 UTC m=+1189.859236587" watchObservedRunningTime="2025-12-02 14:03:53.906547238 +0000 UTC m=+1189.868724323" Dec 02 14:03:53 crc kubenswrapper[4625]: I1202 14:03:53.907667 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8688-account-create-update-c46p5" podStartSLOduration=5.907660547 podStartE2EDuration="5.907660547s" podCreationTimestamp="2025-12-02 14:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:53.845627274 +0000 UTC m=+1189.807804359" watchObservedRunningTime="2025-12-02 14:03:53.907660547 +0000 UTC m=+1189.869837622" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.057088 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-zdssn" podStartSLOduration=6.057057567 podStartE2EDuration="6.057057567s" podCreationTimestamp="2025-12-02 14:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:53.96225838 +0000 UTC m=+1189.924435475" watchObservedRunningTime="2025-12-02 14:03:54.057057567 +0000 UTC m=+1190.019234642" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.110856 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9efe65f2-6105-40f4-a716-c393933036a4" (UID: "9efe65f2-6105-40f4-a716-c393933036a4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.203177 4625 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9efe65f2-6105-40f4-a716-c393933036a4-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.396481 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-b5vwr"] Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.416443 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-b5vwr"] Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.490742 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5hzbv-config-6w645"] Dec 02 14:03:54 crc kubenswrapper[4625]: E1202 14:03:54.491192 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9efe65f2-6105-40f4-a716-c393933036a4" containerName="dnsmasq-dns" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.491205 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="9efe65f2-6105-40f4-a716-c393933036a4" containerName="dnsmasq-dns" Dec 02 14:03:54 crc kubenswrapper[4625]: E1202 14:03:54.491239 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9efe65f2-6105-40f4-a716-c393933036a4" containerName="init" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.491247 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="9efe65f2-6105-40f4-a716-c393933036a4" containerName="init" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.491465 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="9efe65f2-6105-40f4-a716-c393933036a4" containerName="dnsmasq-dns" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.492105 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.505180 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.531819 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5hzbv-config-6w645"] Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.611152 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj8n7\" (UniqueName: \"kubernetes.io/projected/cf0ecd02-83a4-43a5-9180-f111f444180d-kube-api-access-dj8n7\") pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.611778 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-run\") pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.611940 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-run-ovn\") pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.612146 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-log-ovn\") pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.612204 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cf0ecd02-83a4-43a5-9180-f111f444180d-additional-scripts\") pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.612266 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf0ecd02-83a4-43a5-9180-f111f444180d-scripts\") pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.714485 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-run-ovn\") pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.714583 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-log-ovn\") 
pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.714617 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cf0ecd02-83a4-43a5-9180-f111f444180d-additional-scripts\") pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.714675 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf0ecd02-83a4-43a5-9180-f111f444180d-scripts\") pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.714737 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj8n7\" (UniqueName: \"kubernetes.io/projected/cf0ecd02-83a4-43a5-9180-f111f444180d-kube-api-access-dj8n7\") pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.714803 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-run\") pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.715083 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-run-ovn\") pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.715128 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-run\") pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.715193 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-log-ovn\") pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.716270 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cf0ecd02-83a4-43a5-9180-f111f444180d-additional-scripts\") pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.717587 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf0ecd02-83a4-43a5-9180-f111f444180d-scripts\") pod \"ovn-controller-5hzbv-config-6w645\" 
(UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.748658 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj8n7\" (UniqueName: \"kubernetes.io/projected/cf0ecd02-83a4-43a5-9180-f111f444180d-kube-api-access-dj8n7\") pod \"ovn-controller-5hzbv-config-6w645\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.813380 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zlpv9" event={"ID":"b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c","Type":"ContainerStarted","Data":"4994b4305dd73352b58dbdc7fe58d23779c19a9ef328acb387eb08aa24371813"} Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.818467 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.820453 4625 generic.go:334] "Generic (PLEG): container finished" podID="f3fd0764-6795-48ad-b740-98ee968ba808" containerID="1223524f711e9be861c19e77223ac5505f483b6dc7b024e4f119f254a905a4b4" exitCode=0 Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.820666 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zdssn" event={"ID":"f3fd0764-6795-48ad-b740-98ee968ba808","Type":"ContainerDied","Data":"1223524f711e9be861c19e77223ac5505f483b6dc7b024e4f119f254a905a4b4"} Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.827462 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e870-account-create-update-jgjb6" event={"ID":"5258050d-85d9-4593-8d3b-64772e76fcf5","Type":"ContainerStarted","Data":"c8e01ffecb4fa4d88e000b1e842482431b7a54db23ff08b9840374a40c2be5fe"} Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.855690 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-zlpv9" podStartSLOduration=6.8556646390000004 podStartE2EDuration="6.855664639s" podCreationTimestamp="2025-12-02 14:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:54.850699005 +0000 UTC m=+1190.812876080" watchObservedRunningTime="2025-12-02 14:03:54.855664639 +0000 UTC m=+1190.817841714" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.869329 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9efe65f2-6105-40f4-a716-c393933036a4" path="/var/lib/kubelet/pods/9efe65f2-6105-40f4-a716-c393933036a4/volumes" Dec 02 14:03:54 crc kubenswrapper[4625]: I1202 14:03:54.938165 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-e870-account-create-update-jgjb6" podStartSLOduration=6.938140773 podStartE2EDuration="6.938140773s" podCreationTimestamp="2025-12-02 14:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:54.936943962 +0000 UTC m=+1190.899121037" watchObservedRunningTime="2025-12-02 14:03:54.938140773 +0000 UTC m=+1190.900317848" Dec 02 14:03:55 crc kubenswrapper[4625]: I1202 14:03:55.650509 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5hzbv-config-6w645"] Dec 02 14:03:55 crc kubenswrapper[4625]: W1202 14:03:55.696943 4625 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf0ecd02_83a4_43a5_9180_f111f444180d.slice/crio-a560d7ec93d92c434f7e55e8eacd06f73098d7035481f8c84e8c5442bbad71fc WatchSource:0}: Error finding container a560d7ec93d92c434f7e55e8eacd06f73098d7035481f8c84e8c5442bbad71fc: Status 404 returned error can't find the container with id a560d7ec93d92c434f7e55e8eacd06f73098d7035481f8c84e8c5442bbad71fc Dec 02 14:03:55 crc kubenswrapper[4625]: I1202 14:03:55.849156 4625 generic.go:334] "Generic (PLEG): container finished" podID="b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c" containerID="4994b4305dd73352b58dbdc7fe58d23779c19a9ef328acb387eb08aa24371813" exitCode=0 Dec 02 14:03:55 crc kubenswrapper[4625]: I1202 14:03:55.849355 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zlpv9" event={"ID":"b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c","Type":"ContainerDied","Data":"4994b4305dd73352b58dbdc7fe58d23779c19a9ef328acb387eb08aa24371813"} Dec 02 14:03:55 crc kubenswrapper[4625]: I1202 14:03:55.854607 4625 generic.go:334] "Generic (PLEG): container finished" podID="bccdd3a9-0e57-475b-a410-d1f6580c1ff5" containerID="8fbbbc2c13bcd4882f58f5404c9d83abfecbfd7746bc898de7e774fa5448e242" exitCode=0 Dec 02 14:03:55 crc kubenswrapper[4625]: I1202 14:03:55.854676 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8688-account-create-update-c46p5" event={"ID":"bccdd3a9-0e57-475b-a410-d1f6580c1ff5","Type":"ContainerDied","Data":"8fbbbc2c13bcd4882f58f5404c9d83abfecbfd7746bc898de7e774fa5448e242"} Dec 02 14:03:55 crc kubenswrapper[4625]: I1202 14:03:55.856926 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5hzbv-config-6w645" event={"ID":"cf0ecd02-83a4-43a5-9180-f111f444180d","Type":"ContainerStarted","Data":"a560d7ec93d92c434f7e55e8eacd06f73098d7035481f8c84e8c5442bbad71fc"} Dec 02 14:03:55 crc kubenswrapper[4625]: I1202 14:03:55.860767 4625 generic.go:334] "Generic (PLEG): container finished" podID="5258050d-85d9-4593-8d3b-64772e76fcf5" containerID="c8e01ffecb4fa4d88e000b1e842482431b7a54db23ff08b9840374a40c2be5fe" exitCode=0 Dec 02 14:03:55 crc kubenswrapper[4625]: I1202 14:03:55.860832 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e870-account-create-update-jgjb6" event={"ID":"5258050d-85d9-4593-8d3b-64772e76fcf5","Type":"ContainerDied","Data":"c8e01ffecb4fa4d88e000b1e842482431b7a54db23ff08b9840374a40c2be5fe"} Dec 02 14:03:56 crc kubenswrapper[4625]: I1202 14:03:56.496350 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zdssn" Dec 02 14:03:56 crc kubenswrapper[4625]: I1202 14:03:56.563344 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52vqj\" (UniqueName: \"kubernetes.io/projected/f3fd0764-6795-48ad-b740-98ee968ba808-kube-api-access-52vqj\") pod \"f3fd0764-6795-48ad-b740-98ee968ba808\" (UID: \"f3fd0764-6795-48ad-b740-98ee968ba808\") " Dec 02 14:03:56 crc kubenswrapper[4625]: I1202 14:03:56.563746 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3fd0764-6795-48ad-b740-98ee968ba808-operator-scripts\") pod \"f3fd0764-6795-48ad-b740-98ee968ba808\" (UID: \"f3fd0764-6795-48ad-b740-98ee968ba808\") " Dec 02 14:03:56 crc kubenswrapper[4625]: I1202 14:03:56.564962 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3fd0764-6795-48ad-b740-98ee968ba808-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3fd0764-6795-48ad-b740-98ee968ba808" (UID: "f3fd0764-6795-48ad-b740-98ee968ba808"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:56 crc kubenswrapper[4625]: I1202 14:03:56.587678 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3fd0764-6795-48ad-b740-98ee968ba808-kube-api-access-52vqj" (OuterVolumeSpecName: "kube-api-access-52vqj") pod "f3fd0764-6795-48ad-b740-98ee968ba808" (UID: "f3fd0764-6795-48ad-b740-98ee968ba808"). InnerVolumeSpecName "kube-api-access-52vqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:56 crc kubenswrapper[4625]: I1202 14:03:56.666762 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52vqj\" (UniqueName: \"kubernetes.io/projected/f3fd0764-6795-48ad-b740-98ee968ba808-kube-api-access-52vqj\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:56 crc kubenswrapper[4625]: I1202 14:03:56.668817 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3fd0764-6795-48ad-b740-98ee968ba808-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:56 crc kubenswrapper[4625]: I1202 14:03:56.874096 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5hzbv-config-6w645" event={"ID":"cf0ecd02-83a4-43a5-9180-f111f444180d","Type":"ContainerStarted","Data":"74423cd7136a03916398e9f5880e9c2c9d458ff77d53254c811315bc8aadc6f2"} Dec 02 14:03:56 crc kubenswrapper[4625]: I1202 14:03:56.881187 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zdssn" event={"ID":"f3fd0764-6795-48ad-b740-98ee968ba808","Type":"ContainerDied","Data":"5678f64075074cd60a29e5114f5d6ebe8b621a6e17a68aebdc5fb724396b91f1"} Dec 02 14:03:56 crc kubenswrapper[4625]: I1202 14:03:56.881293 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5678f64075074cd60a29e5114f5d6ebe8b621a6e17a68aebdc5fb724396b91f1" Dec 02 14:03:56 crc kubenswrapper[4625]: I1202 14:03:56.881451 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zdssn" Dec 02 14:03:56 crc kubenswrapper[4625]: I1202 14:03:56.927812 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5hzbv-config-6w645" podStartSLOduration=2.927790281 podStartE2EDuration="2.927790281s" podCreationTimestamp="2025-12-02 14:03:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:03:56.917667178 +0000 UTC m=+1192.879844253" watchObservedRunningTime="2025-12-02 14:03:56.927790281 +0000 UTC m=+1192.889967346" Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.548645 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8688-account-create-update-c46p5" Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.595429 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bccdd3a9-0e57-475b-a410-d1f6580c1ff5-operator-scripts\") pod \"bccdd3a9-0e57-475b-a410-d1f6580c1ff5\" (UID: \"bccdd3a9-0e57-475b-a410-d1f6580c1ff5\") " Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.595505 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9xj4\" (UniqueName: \"kubernetes.io/projected/bccdd3a9-0e57-475b-a410-d1f6580c1ff5-kube-api-access-z9xj4\") pod \"bccdd3a9-0e57-475b-a410-d1f6580c1ff5\" (UID: \"bccdd3a9-0e57-475b-a410-d1f6580c1ff5\") " Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.597045 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bccdd3a9-0e57-475b-a410-d1f6580c1ff5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bccdd3a9-0e57-475b-a410-d1f6580c1ff5" (UID: "bccdd3a9-0e57-475b-a410-d1f6580c1ff5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.627534 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bccdd3a9-0e57-475b-a410-d1f6580c1ff5-kube-api-access-z9xj4" (OuterVolumeSpecName: "kube-api-access-z9xj4") pod "bccdd3a9-0e57-475b-a410-d1f6580c1ff5" (UID: "bccdd3a9-0e57-475b-a410-d1f6580c1ff5"). InnerVolumeSpecName "kube-api-access-z9xj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.711500 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bccdd3a9-0e57-475b-a410-d1f6580c1ff5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.711542 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9xj4\" (UniqueName: \"kubernetes.io/projected/bccdd3a9-0e57-475b-a410-d1f6580c1ff5-kube-api-access-z9xj4\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.885239 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zlpv9" Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.914353 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjnzk\" (UniqueName: \"kubernetes.io/projected/b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c-kube-api-access-qjnzk\") pod \"b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c\" (UID: \"b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c\") " Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.914568 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c-operator-scripts\") pod \"b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c\" (UID: \"b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c\") " Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.917046 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c" (UID: "b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.924579 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c-kube-api-access-qjnzk" (OuterVolumeSpecName: "kube-api-access-qjnzk") pod "b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c" (UID: "b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c"). InnerVolumeSpecName "kube-api-access-qjnzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.931979 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8688-account-create-update-c46p5" event={"ID":"bccdd3a9-0e57-475b-a410-d1f6580c1ff5","Type":"ContainerDied","Data":"6ef174e4f68d6873bab06bf542456624af1147e1ba2337d0dd0ebcdf7df42d13"} Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.932046 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ef174e4f68d6873bab06bf542456624af1147e1ba2337d0dd0ebcdf7df42d13" Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.932131 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8688-account-create-update-c46p5" Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.951128 4625 generic.go:334] "Generic (PLEG): container finished" podID="cf0ecd02-83a4-43a5-9180-f111f444180d" containerID="74423cd7136a03916398e9f5880e9c2c9d458ff77d53254c811315bc8aadc6f2" exitCode=0 Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.951218 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5hzbv-config-6w645" event={"ID":"cf0ecd02-83a4-43a5-9180-f111f444180d","Type":"ContainerDied","Data":"74423cd7136a03916398e9f5880e9c2c9d458ff77d53254c811315bc8aadc6f2"} Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.963563 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zlpv9" event={"ID":"b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c","Type":"ContainerDied","Data":"f53611cf115b159898d3e41c2b657e599411dc0c62f30aef38b861bc2adf337d"} Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.963617 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f53611cf115b159898d3e41c2b657e599411dc0c62f30aef38b861bc2adf337d" Dec 02 14:03:57 crc kubenswrapper[4625]: I1202 14:03:57.963684 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zlpv9" Dec 02 14:03:58 crc kubenswrapper[4625]: I1202 14:03:58.015951 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:58 crc kubenswrapper[4625]: I1202 14:03:58.016006 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjnzk\" (UniqueName: \"kubernetes.io/projected/b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c-kube-api-access-qjnzk\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:58 crc kubenswrapper[4625]: I1202 14:03:58.087906 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e870-account-create-update-jgjb6" Dec 02 14:03:58 crc kubenswrapper[4625]: I1202 14:03:58.117292 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5258050d-85d9-4593-8d3b-64772e76fcf5-operator-scripts\") pod \"5258050d-85d9-4593-8d3b-64772e76fcf5\" (UID: \"5258050d-85d9-4593-8d3b-64772e76fcf5\") " Dec 02 14:03:58 crc kubenswrapper[4625]: I1202 14:03:58.117626 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0" Dec 02 14:03:58 crc kubenswrapper[4625]: E1202 14:03:58.117902 4625 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 02 14:03:58 crc kubenswrapper[4625]: E1202 14:03:58.117929 4625 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 02 14:03:58 crc kubenswrapper[4625]: E1202 14:03:58.117988 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift podName:b2953913-1ab3-4821-ab7d-8a20cb58ad90 nodeName:}" failed. 
No retries permitted until 2025-12-02 14:04:14.117968705 +0000 UTC m=+1210.080145770 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift") pod "swift-storage-0" (UID: "b2953913-1ab3-4821-ab7d-8a20cb58ad90") : configmap "swift-ring-files" not found Dec 02 14:03:58 crc kubenswrapper[4625]: I1202 14:03:58.118855 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5258050d-85d9-4593-8d3b-64772e76fcf5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5258050d-85d9-4593-8d3b-64772e76fcf5" (UID: "5258050d-85d9-4593-8d3b-64772e76fcf5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:58 crc kubenswrapper[4625]: I1202 14:03:58.219811 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxkjh\" (UniqueName: \"kubernetes.io/projected/5258050d-85d9-4593-8d3b-64772e76fcf5-kube-api-access-nxkjh\") pod \"5258050d-85d9-4593-8d3b-64772e76fcf5\" (UID: \"5258050d-85d9-4593-8d3b-64772e76fcf5\") " Dec 02 14:03:58 crc kubenswrapper[4625]: I1202 14:03:58.220259 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5258050d-85d9-4593-8d3b-64772e76fcf5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:58 crc kubenswrapper[4625]: I1202 14:03:58.226322 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5258050d-85d9-4593-8d3b-64772e76fcf5-kube-api-access-nxkjh" (OuterVolumeSpecName: "kube-api-access-nxkjh") pod "5258050d-85d9-4593-8d3b-64772e76fcf5" (UID: "5258050d-85d9-4593-8d3b-64772e76fcf5"). InnerVolumeSpecName "kube-api-access-nxkjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:58 crc kubenswrapper[4625]: I1202 14:03:58.322793 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxkjh\" (UniqueName: \"kubernetes.io/projected/5258050d-85d9-4593-8d3b-64772e76fcf5-kube-api-access-nxkjh\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:58 crc kubenswrapper[4625]: I1202 14:03:58.462644 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-5hzbv" Dec 02 14:03:58 crc kubenswrapper[4625]: I1202 14:03:58.982071 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e870-account-create-update-jgjb6" Dec 02 14:03:58 crc kubenswrapper[4625]: I1202 14:03:58.983413 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e870-account-create-update-jgjb6" event={"ID":"5258050d-85d9-4593-8d3b-64772e76fcf5","Type":"ContainerDied","Data":"1525f1e12f03bb6ba7ade061fbf58170977326521f898bd37d8d1800ceda939f"} Dec 02 14:03:58 crc kubenswrapper[4625]: I1202 14:03:58.983474 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1525f1e12f03bb6ba7ade061fbf58170977326521f898bd37d8d1800ceda939f" Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.447591 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.450741 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-log-ovn\") pod \"cf0ecd02-83a4-43a5-9180-f111f444180d\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.450847 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "cf0ecd02-83a4-43a5-9180-f111f444180d" (UID: "cf0ecd02-83a4-43a5-9180-f111f444180d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.451215 4625 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.552424 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-run\") pod \"cf0ecd02-83a4-43a5-9180-f111f444180d\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.552517 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-run-ovn\") pod \"cf0ecd02-83a4-43a5-9180-f111f444180d\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.552777 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf0ecd02-83a4-43a5-9180-f111f444180d-scripts\") pod \"cf0ecd02-83a4-43a5-9180-f111f444180d\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.552822 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj8n7\" (UniqueName: \"kubernetes.io/projected/cf0ecd02-83a4-43a5-9180-f111f444180d-kube-api-access-dj8n7\") pod \"cf0ecd02-83a4-43a5-9180-f111f444180d\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.552894 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cf0ecd02-83a4-43a5-9180-f111f444180d-additional-scripts\") pod \"cf0ecd02-83a4-43a5-9180-f111f444180d\" (UID: \"cf0ecd02-83a4-43a5-9180-f111f444180d\") " Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.553058 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "cf0ecd02-83a4-43a5-9180-f111f444180d" (UID: "cf0ecd02-83a4-43a5-9180-f111f444180d"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.553177 4625 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.553210 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-run" (OuterVolumeSpecName: "var-run") pod "cf0ecd02-83a4-43a5-9180-f111f444180d" (UID: "cf0ecd02-83a4-43a5-9180-f111f444180d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.554182 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0ecd02-83a4-43a5-9180-f111f444180d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "cf0ecd02-83a4-43a5-9180-f111f444180d" (UID: "cf0ecd02-83a4-43a5-9180-f111f444180d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.555174 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0ecd02-83a4-43a5-9180-f111f444180d-scripts" (OuterVolumeSpecName: "scripts") pod "cf0ecd02-83a4-43a5-9180-f111f444180d" (UID: "cf0ecd02-83a4-43a5-9180-f111f444180d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.588802 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0ecd02-83a4-43a5-9180-f111f444180d-kube-api-access-dj8n7" (OuterVolumeSpecName: "kube-api-access-dj8n7") pod "cf0ecd02-83a4-43a5-9180-f111f444180d" (UID: "cf0ecd02-83a4-43a5-9180-f111f444180d"). InnerVolumeSpecName "kube-api-access-dj8n7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.654661 4625 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cf0ecd02-83a4-43a5-9180-f111f444180d-var-run\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.654700 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf0ecd02-83a4-43a5-9180-f111f444180d-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.654712 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj8n7\" (UniqueName: \"kubernetes.io/projected/cf0ecd02-83a4-43a5-9180-f111f444180d-kube-api-access-dj8n7\") on node \"crc\" DevicePath \"\"" Dec 02 14:03:59 crc kubenswrapper[4625]: I1202 14:03:59.654721 4625 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/cf0ecd02-83a4-43a5-9180-f111f444180d-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:00 crc kubenswrapper[4625]: I1202 14:04:00.006738 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5hzbv-config-6w645" event={"ID":"cf0ecd02-83a4-43a5-9180-f111f444180d","Type":"ContainerDied","Data":"a560d7ec93d92c434f7e55e8eacd06f73098d7035481f8c84e8c5442bbad71fc"} Dec 02 14:04:00 crc kubenswrapper[4625]: I1202 14:04:00.006794 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a560d7ec93d92c434f7e55e8eacd06f73098d7035481f8c84e8c5442bbad71fc" Dec 02 14:04:00 crc kubenswrapper[4625]: I1202 14:04:00.006888 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5hzbv-config-6w645" Dec 02 14:04:00 crc kubenswrapper[4625]: I1202 14:04:00.125777 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5hzbv-config-6w645"] Dec 02 14:04:00 crc kubenswrapper[4625]: I1202 14:04:00.139358 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5hzbv-config-6w645"] Dec 02 14:04:00 crc kubenswrapper[4625]: I1202 14:04:00.872110 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf0ecd02-83a4-43a5-9180-f111f444180d" path="/var/lib/kubelet/pods/cf0ecd02-83a4-43a5-9180-f111f444180d/volumes" Dec 02 14:04:05 crc kubenswrapper[4625]: I1202 14:04:05.671542 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.272458 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4jlmz"] Dec 02 14:04:06 crc kubenswrapper[4625]: E1202 14:04:06.273386 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5258050d-85d9-4593-8d3b-64772e76fcf5" containerName="mariadb-account-create-update" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.273407 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="5258050d-85d9-4593-8d3b-64772e76fcf5" containerName="mariadb-account-create-update" Dec 02 14:04:06 crc kubenswrapper[4625]: E1202 14:04:06.273417 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c" containerName="mariadb-database-create" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.273425 4625 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c" containerName="mariadb-database-create" Dec 02 14:04:06 crc kubenswrapper[4625]: E1202 14:04:06.273444 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bccdd3a9-0e57-475b-a410-d1f6580c1ff5" containerName="mariadb-account-create-update" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.273451 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccdd3a9-0e57-475b-a410-d1f6580c1ff5" containerName="mariadb-account-create-update" Dec 02 14:04:06 crc kubenswrapper[4625]: E1202 14:04:06.273467 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fd0764-6795-48ad-b740-98ee968ba808" containerName="mariadb-database-create" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.273473 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fd0764-6795-48ad-b740-98ee968ba808" containerName="mariadb-database-create" Dec 02 14:04:06 crc kubenswrapper[4625]: E1202 14:04:06.273490 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0ecd02-83a4-43a5-9180-f111f444180d" containerName="ovn-config" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.273497 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0ecd02-83a4-43a5-9180-f111f444180d" containerName="ovn-config" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.278760 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0ecd02-83a4-43a5-9180-f111f444180d" containerName="ovn-config" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.278806 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="5258050d-85d9-4593-8d3b-64772e76fcf5" containerName="mariadb-account-create-update" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.278823 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3fd0764-6795-48ad-b740-98ee968ba808" containerName="mariadb-database-create" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.278840 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="bccdd3a9-0e57-475b-a410-d1f6580c1ff5" containerName="mariadb-account-create-update" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.278856 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c" containerName="mariadb-database-create" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.288438 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4jlmz" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.317040 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mttdf\" (UniqueName: \"kubernetes.io/projected/be753d78-885f-4972-9174-039b19cf978e-kube-api-access-mttdf\") pod \"cinder-db-create-4jlmz\" (UID: \"be753d78-885f-4972-9174-039b19cf978e\") " pod="openstack/cinder-db-create-4jlmz" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.317235 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be753d78-885f-4972-9174-039b19cf978e-operator-scripts\") pod \"cinder-db-create-4jlmz\" (UID: \"be753d78-885f-4972-9174-039b19cf978e\") " pod="openstack/cinder-db-create-4jlmz" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.392632 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4jlmz"] Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.395002 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.422747 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mttdf\" (UniqueName: \"kubernetes.io/projected/be753d78-885f-4972-9174-039b19cf978e-kube-api-access-mttdf\") pod \"cinder-db-create-4jlmz\" (UID: \"be753d78-885f-4972-9174-039b19cf978e\") " pod="openstack/cinder-db-create-4jlmz" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.422838 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be753d78-885f-4972-9174-039b19cf978e-operator-scripts\") pod \"cinder-db-create-4jlmz\" (UID: \"be753d78-885f-4972-9174-039b19cf978e\") " pod="openstack/cinder-db-create-4jlmz" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.423853 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be753d78-885f-4972-9174-039b19cf978e-operator-scripts\") pod \"cinder-db-create-4jlmz\" (UID: \"be753d78-885f-4972-9174-039b19cf978e\") " pod="openstack/cinder-db-create-4jlmz" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.461593 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ceca-account-create-update-dhx9s"] Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.463710 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ceca-account-create-update-dhx9s" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.487689 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.546595 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceca-account-create-update-dhx9s"] Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.556136 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mttdf\" (UniqueName: \"kubernetes.io/projected/be753d78-885f-4972-9174-039b19cf978e-kube-api-access-mttdf\") pod \"cinder-db-create-4jlmz\" (UID: \"be753d78-885f-4972-9174-039b19cf978e\") " pod="openstack/cinder-db-create-4jlmz" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.576814 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-29vvf"] Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.584830 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-29vvf" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.612389 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4jlmz" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.627161 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4349bf5c-375b-45aa-b845-70ea55a35bf9-operator-scripts\") pod \"cinder-ceca-account-create-update-dhx9s\" (UID: \"4349bf5c-375b-45aa-b845-70ea55a35bf9\") " pod="openstack/cinder-ceca-account-create-update-dhx9s" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.627230 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d506dcc8-4877-472a-ad57-f88d656e84f3-operator-scripts\") pod \"barbican-db-create-29vvf\" (UID: \"d506dcc8-4877-472a-ad57-f88d656e84f3\") " pod="openstack/barbican-db-create-29vvf" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.627358 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzvr2\" (UniqueName: \"kubernetes.io/projected/d506dcc8-4877-472a-ad57-f88d656e84f3-kube-api-access-lzvr2\") pod \"barbican-db-create-29vvf\" (UID: \"d506dcc8-4877-472a-ad57-f88d656e84f3\") " pod="openstack/barbican-db-create-29vvf" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.627379 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmlvm\" (UniqueName: \"kubernetes.io/projected/4349bf5c-375b-45aa-b845-70ea55a35bf9-kube-api-access-pmlvm\") pod \"cinder-ceca-account-create-update-dhx9s\" (UID: \"4349bf5c-375b-45aa-b845-70ea55a35bf9\") " pod="openstack/cinder-ceca-account-create-update-dhx9s" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.658218 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-29vvf"] Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.729409 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzvr2\" (UniqueName: \"kubernetes.io/projected/d506dcc8-4877-472a-ad57-f88d656e84f3-kube-api-access-lzvr2\") pod \"barbican-db-create-29vvf\" (UID: \"d506dcc8-4877-472a-ad57-f88d656e84f3\") " 
pod="openstack/barbican-db-create-29vvf" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.729962 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmlvm\" (UniqueName: \"kubernetes.io/projected/4349bf5c-375b-45aa-b845-70ea55a35bf9-kube-api-access-pmlvm\") pod \"cinder-ceca-account-create-update-dhx9s\" (UID: \"4349bf5c-375b-45aa-b845-70ea55a35bf9\") " pod="openstack/cinder-ceca-account-create-update-dhx9s" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.730067 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4349bf5c-375b-45aa-b845-70ea55a35bf9-operator-scripts\") pod \"cinder-ceca-account-create-update-dhx9s\" (UID: \"4349bf5c-375b-45aa-b845-70ea55a35bf9\") " pod="openstack/cinder-ceca-account-create-update-dhx9s" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.730106 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d506dcc8-4877-472a-ad57-f88d656e84f3-operator-scripts\") pod \"barbican-db-create-29vvf\" (UID: \"d506dcc8-4877-472a-ad57-f88d656e84f3\") " pod="openstack/barbican-db-create-29vvf" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.731101 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d506dcc8-4877-472a-ad57-f88d656e84f3-operator-scripts\") pod \"barbican-db-create-29vvf\" (UID: \"d506dcc8-4877-472a-ad57-f88d656e84f3\") " pod="openstack/barbican-db-create-29vvf" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.732284 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4349bf5c-375b-45aa-b845-70ea55a35bf9-operator-scripts\") pod \"cinder-ceca-account-create-update-dhx9s\" (UID: \"4349bf5c-375b-45aa-b845-70ea55a35bf9\") " pod="openstack/cinder-ceca-account-create-update-dhx9s" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.795360 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmlvm\" (UniqueName: \"kubernetes.io/projected/4349bf5c-375b-45aa-b845-70ea55a35bf9-kube-api-access-pmlvm\") pod \"cinder-ceca-account-create-update-dhx9s\" (UID: \"4349bf5c-375b-45aa-b845-70ea55a35bf9\") " pod="openstack/cinder-ceca-account-create-update-dhx9s" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.805348 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzvr2\" (UniqueName: \"kubernetes.io/projected/d506dcc8-4877-472a-ad57-f88d656e84f3-kube-api-access-lzvr2\") pod \"barbican-db-create-29vvf\" (UID: \"d506dcc8-4877-472a-ad57-f88d656e84f3\") " pod="openstack/barbican-db-create-29vvf" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.925596 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-29vvf" Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.988901 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-556e-account-create-update-wgkgf"] Dec 02 14:04:06 crc kubenswrapper[4625]: I1202 14:04:06.990826 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-556e-account-create-update-wgkgf" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.001770 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.037688 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2ljc\" (UniqueName: \"kubernetes.io/projected/28e8e120-7dce-4848-ad79-d3b0ad33a778-kube-api-access-v2ljc\") pod \"barbican-556e-account-create-update-wgkgf\" (UID: \"28e8e120-7dce-4848-ad79-d3b0ad33a778\") " pod="openstack/barbican-556e-account-create-update-wgkgf" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.045252 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28e8e120-7dce-4848-ad79-d3b0ad33a778-operator-scripts\") pod \"barbican-556e-account-create-update-wgkgf\" (UID: \"28e8e120-7dce-4848-ad79-d3b0ad33a778\") " pod="openstack/barbican-556e-account-create-update-wgkgf" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.090642 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceca-account-create-update-dhx9s" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.131164 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-556e-account-create-update-wgkgf"] Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.148346 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2ljc\" (UniqueName: \"kubernetes.io/projected/28e8e120-7dce-4848-ad79-d3b0ad33a778-kube-api-access-v2ljc\") pod \"barbican-556e-account-create-update-wgkgf\" (UID: \"28e8e120-7dce-4848-ad79-d3b0ad33a778\") " pod="openstack/barbican-556e-account-create-update-wgkgf" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.148450 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28e8e120-7dce-4848-ad79-d3b0ad33a778-operator-scripts\") pod \"barbican-556e-account-create-update-wgkgf\" (UID: \"28e8e120-7dce-4848-ad79-d3b0ad33a778\") " pod="openstack/barbican-556e-account-create-update-wgkgf" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.149391 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28e8e120-7dce-4848-ad79-d3b0ad33a778-operator-scripts\") pod \"barbican-556e-account-create-update-wgkgf\" (UID: \"28e8e120-7dce-4848-ad79-d3b0ad33a778\") " pod="openstack/barbican-556e-account-create-update-wgkgf" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.170013 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-vdstw"] Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.171574 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-vdstw" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.186561 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vdstw"] Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.226213 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2ljc\" (UniqueName: \"kubernetes.io/projected/28e8e120-7dce-4848-ad79-d3b0ad33a778-kube-api-access-v2ljc\") pod \"barbican-556e-account-create-update-wgkgf\" (UID: \"28e8e120-7dce-4848-ad79-d3b0ad33a778\") " pod="openstack/barbican-556e-account-create-update-wgkgf" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.253017 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7p8q\" (UniqueName: \"kubernetes.io/projected/4d57ba63-e3b3-40a9-b5a5-88c6654b04fe-kube-api-access-c7p8q\") pod \"neutron-db-create-vdstw\" (UID: \"4d57ba63-e3b3-40a9-b5a5-88c6654b04fe\") " pod="openstack/neutron-db-create-vdstw" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.254049 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d57ba63-e3b3-40a9-b5a5-88c6654b04fe-operator-scripts\") pod \"neutron-db-create-vdstw\" (UID: \"4d57ba63-e3b3-40a9-b5a5-88c6654b04fe\") " pod="openstack/neutron-db-create-vdstw" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.326805 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-556e-account-create-update-wgkgf" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.356860 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7p8q\" (UniqueName: \"kubernetes.io/projected/4d57ba63-e3b3-40a9-b5a5-88c6654b04fe-kube-api-access-c7p8q\") pod \"neutron-db-create-vdstw\" (UID: \"4d57ba63-e3b3-40a9-b5a5-88c6654b04fe\") " pod="openstack/neutron-db-create-vdstw" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.356981 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d57ba63-e3b3-40a9-b5a5-88c6654b04fe-operator-scripts\") pod \"neutron-db-create-vdstw\" (UID: \"4d57ba63-e3b3-40a9-b5a5-88c6654b04fe\") " pod="openstack/neutron-db-create-vdstw" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.357817 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d57ba63-e3b3-40a9-b5a5-88c6654b04fe-operator-scripts\") pod \"neutron-db-create-vdstw\" (UID: \"4d57ba63-e3b3-40a9-b5a5-88c6654b04fe\") " pod="openstack/neutron-db-create-vdstw" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.364930 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-flqpl"] Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.369752 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-flqpl" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.408800 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.414714 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.414898 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-v2szz" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.415335 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.448153 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7p8q\" (UniqueName: \"kubernetes.io/projected/4d57ba63-e3b3-40a9-b5a5-88c6654b04fe-kube-api-access-c7p8q\") pod \"neutron-db-create-vdstw\" (UID: \"4d57ba63-e3b3-40a9-b5a5-88c6654b04fe\") " pod="openstack/neutron-db-create-vdstw" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.462304 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-config-data\") pod \"keystone-db-sync-flqpl\" (UID: \"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1\") " pod="openstack/keystone-db-sync-flqpl" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.462623 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-combined-ca-bundle\") pod \"keystone-db-sync-flqpl\" (UID: \"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1\") " pod="openstack/keystone-db-sync-flqpl" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.463275 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwcd9\" (UniqueName: \"kubernetes.io/projected/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-kube-api-access-fwcd9\") pod \"keystone-db-sync-flqpl\" (UID: \"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1\") " pod="openstack/keystone-db-sync-flqpl" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.467581 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-flqpl"] Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.486572 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2866-account-create-update-4f925"] Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.491204 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2866-account-create-update-4f925" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.501995 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.525813 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-vdstw" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.571240 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwcd9\" (UniqueName: \"kubernetes.io/projected/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-kube-api-access-fwcd9\") pod \"keystone-db-sync-flqpl\" (UID: \"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1\") " pod="openstack/keystone-db-sync-flqpl" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.571334 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpvzl\" (UniqueName: \"kubernetes.io/projected/cab9b0dc-588c-4c86-a1de-25e3c71dce53-kube-api-access-vpvzl\") pod \"neutron-2866-account-create-update-4f925\" (UID: \"cab9b0dc-588c-4c86-a1de-25e3c71dce53\") " pod="openstack/neutron-2866-account-create-update-4f925" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.571403 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cab9b0dc-588c-4c86-a1de-25e3c71dce53-operator-scripts\") pod \"neutron-2866-account-create-update-4f925\" (UID: \"cab9b0dc-588c-4c86-a1de-25e3c71dce53\") " pod="openstack/neutron-2866-account-create-update-4f925" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.571446 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-config-data\") pod \"keystone-db-sync-flqpl\" (UID: \"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1\") " pod="openstack/keystone-db-sync-flqpl" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.571494 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-combined-ca-bundle\") pod \"keystone-db-sync-flqpl\" (UID: \"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1\") " pod="openstack/keystone-db-sync-flqpl" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.582369 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-config-data\") pod \"keystone-db-sync-flqpl\" (UID: \"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1\") " pod="openstack/keystone-db-sync-flqpl" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.599599 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2866-account-create-update-4f925"] Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.606408 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-combined-ca-bundle\") pod \"keystone-db-sync-flqpl\" (UID: \"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1\") " pod="openstack/keystone-db-sync-flqpl" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.628356 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwcd9\" (UniqueName: \"kubernetes.io/projected/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-kube-api-access-fwcd9\") pod \"keystone-db-sync-flqpl\" (UID: \"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1\") " pod="openstack/keystone-db-sync-flqpl" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.674058 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpvzl\" 
(UniqueName: \"kubernetes.io/projected/cab9b0dc-588c-4c86-a1de-25e3c71dce53-kube-api-access-vpvzl\") pod \"neutron-2866-account-create-update-4f925\" (UID: \"cab9b0dc-588c-4c86-a1de-25e3c71dce53\") " pod="openstack/neutron-2866-account-create-update-4f925" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.674651 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cab9b0dc-588c-4c86-a1de-25e3c71dce53-operator-scripts\") pod \"neutron-2866-account-create-update-4f925\" (UID: \"cab9b0dc-588c-4c86-a1de-25e3c71dce53\") " pod="openstack/neutron-2866-account-create-update-4f925" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.675580 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cab9b0dc-588c-4c86-a1de-25e3c71dce53-operator-scripts\") pod \"neutron-2866-account-create-update-4f925\" (UID: \"cab9b0dc-588c-4c86-a1de-25e3c71dce53\") " pod="openstack/neutron-2866-account-create-update-4f925" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.705714 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-flqpl" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.706852 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpvzl\" (UniqueName: \"kubernetes.io/projected/cab9b0dc-588c-4c86-a1de-25e3c71dce53-kube-api-access-vpvzl\") pod \"neutron-2866-account-create-update-4f925\" (UID: \"cab9b0dc-588c-4c86-a1de-25e3c71dce53\") " pod="openstack/neutron-2866-account-create-update-4f925" Dec 02 14:04:07 crc kubenswrapper[4625]: I1202 14:04:07.814648 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2866-account-create-update-4f925" Dec 02 14:04:09 crc kubenswrapper[4625]: I1202 14:04:09.129842 4625 generic.go:334] "Generic (PLEG): container finished" podID="b5858ebe-f677-4a48-b729-a8c4023b346d" containerID="644b48dea754464da4171edc76e73ee04824f237305990429ba9ee9798dddfaa" exitCode=0 Dec 02 14:04:09 crc kubenswrapper[4625]: I1202 14:04:09.129966 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9hhrz" event={"ID":"b5858ebe-f677-4a48-b729-a8c4023b346d","Type":"ContainerDied","Data":"644b48dea754464da4171edc76e73ee04824f237305990429ba9ee9798dddfaa"} Dec 02 14:04:14 crc kubenswrapper[4625]: I1202 14:04:14.133227 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0" Dec 02 14:04:14 crc kubenswrapper[4625]: I1202 14:04:14.147137 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2953913-1ab3-4821-ab7d-8a20cb58ad90-etc-swift\") pod \"swift-storage-0\" (UID: \"b2953913-1ab3-4821-ab7d-8a20cb58ad90\") " pod="openstack/swift-storage-0" Dec 02 14:04:14 crc kubenswrapper[4625]: I1202 14:04:14.289680 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 02 14:04:18 crc kubenswrapper[4625]: E1202 14:04:18.747614 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Dec 02 14:04:18 crc kubenswrapper[4625]: E1202 14:04:18.748853 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5cps2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-6xqcm_openstack(314f653d-9ec6-47e4-af2a-aadc2440d332): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:04:18 crc kubenswrapper[4625]: E1202 14:04:18.750094 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-6xqcm" podUID="314f653d-9ec6-47e4-af2a-aadc2440d332" Dec 02 14:04:18 crc kubenswrapper[4625]: I1202 14:04:18.798784 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-9hhrz" Dec 02 14:04:18 crc kubenswrapper[4625]: I1202 14:04:18.847088 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b5858ebe-f677-4a48-b729-a8c4023b346d-ring-data-devices\") pod \"b5858ebe-f677-4a48-b729-a8c4023b346d\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " Dec 02 14:04:18 crc kubenswrapper[4625]: I1202 14:04:18.847202 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-swiftconf\") pod \"b5858ebe-f677-4a48-b729-a8c4023b346d\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " Dec 02 14:04:18 crc kubenswrapper[4625]: I1202 14:04:18.847325 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b5858ebe-f677-4a48-b729-a8c4023b346d-etc-swift\") pod \"b5858ebe-f677-4a48-b729-a8c4023b346d\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " Dec 02 14:04:18 crc kubenswrapper[4625]: I1202 14:04:18.847353 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ggn8\" (UniqueName: \"kubernetes.io/projected/b5858ebe-f677-4a48-b729-a8c4023b346d-kube-api-access-4ggn8\") pod \"b5858ebe-f677-4a48-b729-a8c4023b346d\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " Dec 02 14:04:18 crc kubenswrapper[4625]: I1202 14:04:18.847396 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-combined-ca-bundle\") pod \"b5858ebe-f677-4a48-b729-a8c4023b346d\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " Dec 02 14:04:18 crc kubenswrapper[4625]: I1202 14:04:18.847440 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-dispersionconf\") pod \"b5858ebe-f677-4a48-b729-a8c4023b346d\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " Dec 02 14:04:18 crc kubenswrapper[4625]: I1202 14:04:18.847604 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5858ebe-f677-4a48-b729-a8c4023b346d-scripts\") pod \"b5858ebe-f677-4a48-b729-a8c4023b346d\" (UID: \"b5858ebe-f677-4a48-b729-a8c4023b346d\") " Dec 02 14:04:18 crc kubenswrapper[4625]: I1202 14:04:18.855176 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5858ebe-f677-4a48-b729-a8c4023b346d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b5858ebe-f677-4a48-b729-a8c4023b346d" (UID: "b5858ebe-f677-4a48-b729-a8c4023b346d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:18 crc kubenswrapper[4625]: I1202 14:04:18.867067 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5858ebe-f677-4a48-b729-a8c4023b346d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b5858ebe-f677-4a48-b729-a8c4023b346d" (UID: "b5858ebe-f677-4a48-b729-a8c4023b346d"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:04:18 crc kubenswrapper[4625]: I1202 14:04:18.957409 4625 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b5858ebe-f677-4a48-b729-a8c4023b346d-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:18 crc kubenswrapper[4625]: I1202 14:04:18.957445 4625 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b5858ebe-f677-4a48-b729-a8c4023b346d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:18 crc kubenswrapper[4625]: I1202 14:04:18.969744 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5858ebe-f677-4a48-b729-a8c4023b346d-kube-api-access-4ggn8" (OuterVolumeSpecName: "kube-api-access-4ggn8") pod "b5858ebe-f677-4a48-b729-a8c4023b346d" (UID: "b5858ebe-f677-4a48-b729-a8c4023b346d"). InnerVolumeSpecName "kube-api-access-4ggn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:19 crc kubenswrapper[4625]: I1202 14:04:19.024632 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b5858ebe-f677-4a48-b729-a8c4023b346d" (UID: "b5858ebe-f677-4a48-b729-a8c4023b346d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:19 crc kubenswrapper[4625]: I1202 14:04:19.059220 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5858ebe-f677-4a48-b729-a8c4023b346d-scripts" (OuterVolumeSpecName: "scripts") pod "b5858ebe-f677-4a48-b729-a8c4023b346d" (UID: "b5858ebe-f677-4a48-b729-a8c4023b346d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:19 crc kubenswrapper[4625]: I1202 14:04:19.073173 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ggn8\" (UniqueName: \"kubernetes.io/projected/b5858ebe-f677-4a48-b729-a8c4023b346d-kube-api-access-4ggn8\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:19 crc kubenswrapper[4625]: I1202 14:04:19.073226 4625 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:19 crc kubenswrapper[4625]: I1202 14:04:19.073238 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5858ebe-f677-4a48-b729-a8c4023b346d-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:19 crc kubenswrapper[4625]: I1202 14:04:19.101057 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b5858ebe-f677-4a48-b729-a8c4023b346d" (UID: "b5858ebe-f677-4a48-b729-a8c4023b346d"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:19 crc kubenswrapper[4625]: I1202 14:04:19.160491 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5858ebe-f677-4a48-b729-a8c4023b346d" (UID: "b5858ebe-f677-4a48-b729-a8c4023b346d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:19 crc kubenswrapper[4625]: I1202 14:04:19.174987 4625 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:19 crc kubenswrapper[4625]: I1202 14:04:19.175047 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5858ebe-f677-4a48-b729-a8c4023b346d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:19 crc kubenswrapper[4625]: I1202 14:04:19.274469 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:04:19 crc kubenswrapper[4625]: I1202 14:04:19.274539 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:04:19 crc kubenswrapper[4625]: I1202 14:04:19.292948 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9hhrz" Dec 02 14:04:19 crc kubenswrapper[4625]: I1202 14:04:19.293427 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9hhrz" event={"ID":"b5858ebe-f677-4a48-b729-a8c4023b346d","Type":"ContainerDied","Data":"34c834fe0528a2889a5d97eb708c1926a0e1f1050a2007803303f4a20a1dcdda"} Dec 02 14:04:19 crc kubenswrapper[4625]: I1202 14:04:19.293518 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34c834fe0528a2889a5d97eb708c1926a0e1f1050a2007803303f4a20a1dcdda" Dec 02 14:04:19 crc kubenswrapper[4625]: E1202 14:04:19.370237 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-6xqcm" podUID="314f653d-9ec6-47e4-af2a-aadc2440d332" Dec 02 14:04:19 crc kubenswrapper[4625]: I1202 14:04:19.962467 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-556e-account-create-update-wgkgf"] Dec 02 14:04:19 crc kubenswrapper[4625]: I1202 14:04:19.976966 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceca-account-create-update-dhx9s"] Dec 02 14:04:19 crc kubenswrapper[4625]: W1202 14:04:19.988220 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4349bf5c_375b_45aa_b845_70ea55a35bf9.slice/crio-aaf02e0b674fd7413f60d5736f437a26a5e818ca5c3a8d09acc3add37e466190 WatchSource:0}: Error finding container aaf02e0b674fd7413f60d5736f437a26a5e818ca5c3a8d09acc3add37e466190: Status 404 returned error can't find the container with id aaf02e0b674fd7413f60d5736f437a26a5e818ca5c3a8d09acc3add37e466190 Dec 02 14:04:20 crc kubenswrapper[4625]: W1202 14:04:20.168384 4625 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcab9b0dc_588c_4c86_a1de_25e3c71dce53.slice/crio-c3bbcbb77a9554abc9cee40f56d3c650262d700b41248bd7e83a6c8c89f21f6f WatchSource:0}: Error finding container c3bbcbb77a9554abc9cee40f56d3c650262d700b41248bd7e83a6c8c89f21f6f: Status 404 returned error can't find the container with id c3bbcbb77a9554abc9cee40f56d3c650262d700b41248bd7e83a6c8c89f21f6f Dec 02 14:04:20 crc kubenswrapper[4625]: I1202 14:04:20.185342 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2866-account-create-update-4f925"] Dec 02 14:04:20 crc kubenswrapper[4625]: I1202 14:04:20.209961 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-29vvf"] Dec 02 14:04:20 crc kubenswrapper[4625]: I1202 14:04:20.265574 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4jlmz"] Dec 02 14:04:20 crc kubenswrapper[4625]: W1202 14:04:20.266015 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe753d78_885f_4972_9174_039b19cf978e.slice/crio-e9a0606c210621bbc9bb2598e84895ac2b7390a955bfea9a7cf141d934245900 WatchSource:0}: Error finding container e9a0606c210621bbc9bb2598e84895ac2b7390a955bfea9a7cf141d934245900: Status 404 returned error can't find the container with id e9a0606c210621bbc9bb2598e84895ac2b7390a955bfea9a7cf141d934245900 Dec 02 14:04:20 crc kubenswrapper[4625]: I1202 14:04:20.275811 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vdstw"] Dec 02 14:04:20 crc kubenswrapper[4625]: I1202 14:04:20.300666 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-flqpl"] Dec 02 14:04:20 crc kubenswrapper[4625]: W1202 14:04:20.306818 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf241e835_4f8c_4ef3_9e74_0e1f3ff87ad1.slice/crio-af118b796f5260dd352184073146cfabf30009da5c497bad2e662793273176d9 WatchSource:0}: Error finding container af118b796f5260dd352184073146cfabf30009da5c497bad2e662793273176d9: Status 404 returned error can't find the container with id af118b796f5260dd352184073146cfabf30009da5c497bad2e662793273176d9 Dec 02 14:04:20 crc kubenswrapper[4625]: I1202 14:04:20.338360 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-29vvf" event={"ID":"d506dcc8-4877-472a-ad57-f88d656e84f3","Type":"ContainerStarted","Data":"622bfaf5b293ee425e1008ff3c6a8fd9a686faef681824065fae38f060b3cb6a"} Dec 02 14:04:20 crc kubenswrapper[4625]: I1202 14:04:20.340478 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vdstw" event={"ID":"4d57ba63-e3b3-40a9-b5a5-88c6654b04fe","Type":"ContainerStarted","Data":"7af19490c94361c1a06936b5b9e3d9616bfe0b68f7a3fb4460a3c500092865e7"} Dec 02 14:04:20 crc kubenswrapper[4625]: I1202 14:04:20.342364 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-556e-account-create-update-wgkgf" event={"ID":"28e8e120-7dce-4848-ad79-d3b0ad33a778","Type":"ContainerStarted","Data":"7886b950410bf1122089f2292c67952ac1905c8e74f730bd2756c1b0a294ec68"} Dec 02 14:04:20 crc kubenswrapper[4625]: I1202 14:04:20.361733 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 02 14:04:20 crc kubenswrapper[4625]: I1202 14:04:20.362496 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-2866-account-create-update-4f925" event={"ID":"cab9b0dc-588c-4c86-a1de-25e3c71dce53","Type":"ContainerStarted","Data":"c3bbcbb77a9554abc9cee40f56d3c650262d700b41248bd7e83a6c8c89f21f6f"} Dec 02 14:04:20 crc kubenswrapper[4625]: I1202 14:04:20.395675 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4jlmz" event={"ID":"be753d78-885f-4972-9174-039b19cf978e","Type":"ContainerStarted","Data":"e9a0606c210621bbc9bb2598e84895ac2b7390a955bfea9a7cf141d934245900"} Dec 02 14:04:20 crc kubenswrapper[4625]: I1202 14:04:20.398044 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceca-account-create-update-dhx9s" event={"ID":"4349bf5c-375b-45aa-b845-70ea55a35bf9","Type":"ContainerStarted","Data":"aaf02e0b674fd7413f60d5736f437a26a5e818ca5c3a8d09acc3add37e466190"} Dec 02 14:04:20 crc kubenswrapper[4625]: W1202 14:04:20.407950 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2953913_1ab3_4821_ab7d_8a20cb58ad90.slice/crio-0879b5844e4836dec4c506502ebf2fc677681e4ceabadfd2f8817a9f88036f8f WatchSource:0}: Error finding container 0879b5844e4836dec4c506502ebf2fc677681e4ceabadfd2f8817a9f88036f8f: Status 404 returned error can't find the container with id 0879b5844e4836dec4c506502ebf2fc677681e4ceabadfd2f8817a9f88036f8f Dec 02 14:04:21 crc kubenswrapper[4625]: I1202 14:04:21.412711 4625 generic.go:334] "Generic (PLEG): container finished" podID="be753d78-885f-4972-9174-039b19cf978e" containerID="2ae2b3a57540d92f59eca7134bf8d8a5bb0f6d0f32b327e909a9dbab809fc78f" exitCode=0 Dec 02 14:04:21 crc kubenswrapper[4625]: I1202 14:04:21.413102 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4jlmz" event={"ID":"be753d78-885f-4972-9174-039b19cf978e","Type":"ContainerDied","Data":"2ae2b3a57540d92f59eca7134bf8d8a5bb0f6d0f32b327e909a9dbab809fc78f"} Dec 02 14:04:21 crc kubenswrapper[4625]: I1202 14:04:21.416461 4625 generic.go:334] "Generic (PLEG): container finished" podID="4349bf5c-375b-45aa-b845-70ea55a35bf9" containerID="da3a5d21fd9235617d2b65ad44ec840c7571abbbac8f437f39b5399d6af8ee87" exitCode=0 Dec 02 14:04:21 crc kubenswrapper[4625]: I1202 14:04:21.416521 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceca-account-create-update-dhx9s" event={"ID":"4349bf5c-375b-45aa-b845-70ea55a35bf9","Type":"ContainerDied","Data":"da3a5d21fd9235617d2b65ad44ec840c7571abbbac8f437f39b5399d6af8ee87"} Dec 02 14:04:21 crc kubenswrapper[4625]: I1202 14:04:21.418333 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-flqpl" event={"ID":"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1","Type":"ContainerStarted","Data":"af118b796f5260dd352184073146cfabf30009da5c497bad2e662793273176d9"} Dec 02 14:04:21 crc kubenswrapper[4625]: I1202 14:04:21.422921 4625 generic.go:334] "Generic (PLEG): container finished" podID="d506dcc8-4877-472a-ad57-f88d656e84f3" containerID="76b64dbb33f1a96b0917cd51632bfb331aa3ddc10024bcc5b50180bc5c56cc42" exitCode=0 Dec 02 14:04:21 crc kubenswrapper[4625]: I1202 14:04:21.422973 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-29vvf" event={"ID":"d506dcc8-4877-472a-ad57-f88d656e84f3","Type":"ContainerDied","Data":"76b64dbb33f1a96b0917cd51632bfb331aa3ddc10024bcc5b50180bc5c56cc42"} Dec 02 14:04:21 crc kubenswrapper[4625]: I1202 14:04:21.425151 4625 generic.go:334] "Generic (PLEG): container finished" 
podID="4d57ba63-e3b3-40a9-b5a5-88c6654b04fe" containerID="461c387bdc655943790e15d9540cfd547337fd3f11ffc4d78fd9d65ef77a37cf" exitCode=0 Dec 02 14:04:21 crc kubenswrapper[4625]: I1202 14:04:21.425230 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vdstw" event={"ID":"4d57ba63-e3b3-40a9-b5a5-88c6654b04fe","Type":"ContainerDied","Data":"461c387bdc655943790e15d9540cfd547337fd3f11ffc4d78fd9d65ef77a37cf"} Dec 02 14:04:21 crc kubenswrapper[4625]: I1202 14:04:21.427389 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2953913-1ab3-4821-ab7d-8a20cb58ad90","Type":"ContainerStarted","Data":"0879b5844e4836dec4c506502ebf2fc677681e4ceabadfd2f8817a9f88036f8f"} Dec 02 14:04:21 crc kubenswrapper[4625]: I1202 14:04:21.435653 4625 generic.go:334] "Generic (PLEG): container finished" podID="28e8e120-7dce-4848-ad79-d3b0ad33a778" containerID="7d08095349d9631495e96eaa665d331969670996d29344ce0171c008cac42eef" exitCode=0 Dec 02 14:04:21 crc kubenswrapper[4625]: I1202 14:04:21.435785 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-556e-account-create-update-wgkgf" event={"ID":"28e8e120-7dce-4848-ad79-d3b0ad33a778","Type":"ContainerDied","Data":"7d08095349d9631495e96eaa665d331969670996d29344ce0171c008cac42eef"} Dec 02 14:04:21 crc kubenswrapper[4625]: I1202 14:04:21.440503 4625 generic.go:334] "Generic (PLEG): container finished" podID="cab9b0dc-588c-4c86-a1de-25e3c71dce53" containerID="51c0c00ed055b8bf0c37ebf84a7952efa79cb3c1cd36c31368cbefc23891dfbd" exitCode=0 Dec 02 14:04:21 crc kubenswrapper[4625]: I1202 14:04:21.440552 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2866-account-create-update-4f925" event={"ID":"cab9b0dc-588c-4c86-a1de-25e3c71dce53","Type":"ContainerDied","Data":"51c0c00ed055b8bf0c37ebf84a7952efa79cb3c1cd36c31368cbefc23891dfbd"} Dec 02 14:04:22 crc kubenswrapper[4625]: I1202 14:04:22.460366 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2953913-1ab3-4821-ab7d-8a20cb58ad90","Type":"ContainerStarted","Data":"fbbf6d8f06e986d5bf83f762668a7f79bc06b26c4458c30d34d707da2de06275"} Dec 02 14:04:22 crc kubenswrapper[4625]: I1202 14:04:22.461195 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2953913-1ab3-4821-ab7d-8a20cb58ad90","Type":"ContainerStarted","Data":"c3b80a7d0f9274b2106fb67dfd0a69ddbde62ad5d54c6e62b0e186d3b7abca3e"} Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.243083 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceca-account-create-update-dhx9s" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.291999 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-vdstw" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.394158 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4349bf5c-375b-45aa-b845-70ea55a35bf9-operator-scripts\") pod \"4349bf5c-375b-45aa-b845-70ea55a35bf9\" (UID: \"4349bf5c-375b-45aa-b845-70ea55a35bf9\") " Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.394208 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7p8q\" (UniqueName: \"kubernetes.io/projected/4d57ba63-e3b3-40a9-b5a5-88c6654b04fe-kube-api-access-c7p8q\") pod \"4d57ba63-e3b3-40a9-b5a5-88c6654b04fe\" (UID: \"4d57ba63-e3b3-40a9-b5a5-88c6654b04fe\") " Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.394291 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d57ba63-e3b3-40a9-b5a5-88c6654b04fe-operator-scripts\") pod \"4d57ba63-e3b3-40a9-b5a5-88c6654b04fe\" (UID: \"4d57ba63-e3b3-40a9-b5a5-88c6654b04fe\") " Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.394523 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmlvm\" (UniqueName: \"kubernetes.io/projected/4349bf5c-375b-45aa-b845-70ea55a35bf9-kube-api-access-pmlvm\") pod \"4349bf5c-375b-45aa-b845-70ea55a35bf9\" (UID: \"4349bf5c-375b-45aa-b845-70ea55a35bf9\") " Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.398591 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4349bf5c-375b-45aa-b845-70ea55a35bf9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4349bf5c-375b-45aa-b845-70ea55a35bf9" (UID: "4349bf5c-375b-45aa-b845-70ea55a35bf9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.398731 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d57ba63-e3b3-40a9-b5a5-88c6654b04fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d57ba63-e3b3-40a9-b5a5-88c6654b04fe" (UID: "4d57ba63-e3b3-40a9-b5a5-88c6654b04fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.416618 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d57ba63-e3b3-40a9-b5a5-88c6654b04fe-kube-api-access-c7p8q" (OuterVolumeSpecName: "kube-api-access-c7p8q") pod "4d57ba63-e3b3-40a9-b5a5-88c6654b04fe" (UID: "4d57ba63-e3b3-40a9-b5a5-88c6654b04fe"). InnerVolumeSpecName "kube-api-access-c7p8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.431688 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4349bf5c-375b-45aa-b845-70ea55a35bf9-kube-api-access-pmlvm" (OuterVolumeSpecName: "kube-api-access-pmlvm") pod "4349bf5c-375b-45aa-b845-70ea55a35bf9" (UID: "4349bf5c-375b-45aa-b845-70ea55a35bf9"). InnerVolumeSpecName "kube-api-access-pmlvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.485224 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceca-account-create-update-dhx9s" event={"ID":"4349bf5c-375b-45aa-b845-70ea55a35bf9","Type":"ContainerDied","Data":"aaf02e0b674fd7413f60d5736f437a26a5e818ca5c3a8d09acc3add37e466190"} Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.485286 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaf02e0b674fd7413f60d5736f437a26a5e818ca5c3a8d09acc3add37e466190" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.485376 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceca-account-create-update-dhx9s" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.492457 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vdstw" event={"ID":"4d57ba63-e3b3-40a9-b5a5-88c6654b04fe","Type":"ContainerDied","Data":"7af19490c94361c1a06936b5b9e3d9616bfe0b68f7a3fb4460a3c500092865e7"} Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.492583 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7af19490c94361c1a06936b5b9e3d9616bfe0b68f7a3fb4460a3c500092865e7" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.492652 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vdstw" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.512522 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmlvm\" (UniqueName: \"kubernetes.io/projected/4349bf5c-375b-45aa-b845-70ea55a35bf9-kube-api-access-pmlvm\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.512571 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4349bf5c-375b-45aa-b845-70ea55a35bf9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.512581 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7p8q\" (UniqueName: \"kubernetes.io/projected/4d57ba63-e3b3-40a9-b5a5-88c6654b04fe-kube-api-access-c7p8q\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.512590 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d57ba63-e3b3-40a9-b5a5-88c6654b04fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.524467 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2953913-1ab3-4821-ab7d-8a20cb58ad90","Type":"ContainerStarted","Data":"7b6e10fd3c3dc252a15ba12b69a3f3d010ba9c5f9a14e680efaf48d8c485161d"} Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.524526 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2953913-1ab3-4821-ab7d-8a20cb58ad90","Type":"ContainerStarted","Data":"d161d534fed6465231cc7988fc163e9b8ede6dd4df555f94f11da779687c24cd"} Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.578938 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2866-account-create-update-4f925" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.589047 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-29vvf" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.593908 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-556e-account-create-update-wgkgf" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.717967 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzvr2\" (UniqueName: \"kubernetes.io/projected/d506dcc8-4877-472a-ad57-f88d656e84f3-kube-api-access-lzvr2\") pod \"d506dcc8-4877-472a-ad57-f88d656e84f3\" (UID: \"d506dcc8-4877-472a-ad57-f88d656e84f3\") " Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.718898 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cab9b0dc-588c-4c86-a1de-25e3c71dce53-operator-scripts\") pod \"cab9b0dc-588c-4c86-a1de-25e3c71dce53\" (UID: \"cab9b0dc-588c-4c86-a1de-25e3c71dce53\") " Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.719003 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d506dcc8-4877-472a-ad57-f88d656e84f3-operator-scripts\") pod \"d506dcc8-4877-472a-ad57-f88d656e84f3\" (UID: \"d506dcc8-4877-472a-ad57-f88d656e84f3\") " Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.719200 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28e8e120-7dce-4848-ad79-d3b0ad33a778-operator-scripts\") pod \"28e8e120-7dce-4848-ad79-d3b0ad33a778\" (UID: \"28e8e120-7dce-4848-ad79-d3b0ad33a778\") " Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.719241 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2ljc\" (UniqueName: \"kubernetes.io/projected/28e8e120-7dce-4848-ad79-d3b0ad33a778-kube-api-access-v2ljc\") pod \"28e8e120-7dce-4848-ad79-d3b0ad33a778\" (UID: \"28e8e120-7dce-4848-ad79-d3b0ad33a778\") " Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.719274 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpvzl\" (UniqueName: \"kubernetes.io/projected/cab9b0dc-588c-4c86-a1de-25e3c71dce53-kube-api-access-vpvzl\") pod \"cab9b0dc-588c-4c86-a1de-25e3c71dce53\" (UID: \"cab9b0dc-588c-4c86-a1de-25e3c71dce53\") " Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.719591 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cab9b0dc-588c-4c86-a1de-25e3c71dce53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cab9b0dc-588c-4c86-a1de-25e3c71dce53" (UID: "cab9b0dc-588c-4c86-a1de-25e3c71dce53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.720020 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cab9b0dc-588c-4c86-a1de-25e3c71dce53-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.720165 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28e8e120-7dce-4848-ad79-d3b0ad33a778-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28e8e120-7dce-4848-ad79-d3b0ad33a778" (UID: "28e8e120-7dce-4848-ad79-d3b0ad33a778"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.720913 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d506dcc8-4877-472a-ad57-f88d656e84f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d506dcc8-4877-472a-ad57-f88d656e84f3" (UID: "d506dcc8-4877-472a-ad57-f88d656e84f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.723802 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d506dcc8-4877-472a-ad57-f88d656e84f3-kube-api-access-lzvr2" (OuterVolumeSpecName: "kube-api-access-lzvr2") pod "d506dcc8-4877-472a-ad57-f88d656e84f3" (UID: "d506dcc8-4877-472a-ad57-f88d656e84f3"). InnerVolumeSpecName "kube-api-access-lzvr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.724937 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cab9b0dc-588c-4c86-a1de-25e3c71dce53-kube-api-access-vpvzl" (OuterVolumeSpecName: "kube-api-access-vpvzl") pod "cab9b0dc-588c-4c86-a1de-25e3c71dce53" (UID: "cab9b0dc-588c-4c86-a1de-25e3c71dce53"). InnerVolumeSpecName "kube-api-access-vpvzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.733208 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28e8e120-7dce-4848-ad79-d3b0ad33a778-kube-api-access-v2ljc" (OuterVolumeSpecName: "kube-api-access-v2ljc") pod "28e8e120-7dce-4848-ad79-d3b0ad33a778" (UID: "28e8e120-7dce-4848-ad79-d3b0ad33a778"). InnerVolumeSpecName "kube-api-access-v2ljc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.773171 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4jlmz" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.823358 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzvr2\" (UniqueName: \"kubernetes.io/projected/d506dcc8-4877-472a-ad57-f88d656e84f3-kube-api-access-lzvr2\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.823420 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d506dcc8-4877-472a-ad57-f88d656e84f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.823460 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28e8e120-7dce-4848-ad79-d3b0ad33a778-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.823474 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2ljc\" (UniqueName: \"kubernetes.io/projected/28e8e120-7dce-4848-ad79-d3b0ad33a778-kube-api-access-v2ljc\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.826526 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpvzl\" (UniqueName: \"kubernetes.io/projected/cab9b0dc-588c-4c86-a1de-25e3c71dce53-kube-api-access-vpvzl\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.928566 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be753d78-885f-4972-9174-039b19cf978e-operator-scripts\") pod \"be753d78-885f-4972-9174-039b19cf978e\" (UID: \"be753d78-885f-4972-9174-039b19cf978e\") " Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.928670 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mttdf\" (UniqueName: \"kubernetes.io/projected/be753d78-885f-4972-9174-039b19cf978e-kube-api-access-mttdf\") pod \"be753d78-885f-4972-9174-039b19cf978e\" (UID: \"be753d78-885f-4972-9174-039b19cf978e\") " Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.929998 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be753d78-885f-4972-9174-039b19cf978e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be753d78-885f-4972-9174-039b19cf978e" (UID: "be753d78-885f-4972-9174-039b19cf978e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:23 crc kubenswrapper[4625]: I1202 14:04:23.933153 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be753d78-885f-4972-9174-039b19cf978e-kube-api-access-mttdf" (OuterVolumeSpecName: "kube-api-access-mttdf") pod "be753d78-885f-4972-9174-039b19cf978e" (UID: "be753d78-885f-4972-9174-039b19cf978e"). InnerVolumeSpecName "kube-api-access-mttdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:24 crc kubenswrapper[4625]: I1202 14:04:24.031731 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mttdf\" (UniqueName: \"kubernetes.io/projected/be753d78-885f-4972-9174-039b19cf978e-kube-api-access-mttdf\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:24 crc kubenswrapper[4625]: I1202 14:04:24.031783 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be753d78-885f-4972-9174-039b19cf978e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:24 crc kubenswrapper[4625]: I1202 14:04:24.547244 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2866-account-create-update-4f925" Dec 02 14:04:24 crc kubenswrapper[4625]: I1202 14:04:24.547438 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2866-account-create-update-4f925" event={"ID":"cab9b0dc-588c-4c86-a1de-25e3c71dce53","Type":"ContainerDied","Data":"c3bbcbb77a9554abc9cee40f56d3c650262d700b41248bd7e83a6c8c89f21f6f"} Dec 02 14:04:24 crc kubenswrapper[4625]: I1202 14:04:24.547958 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3bbcbb77a9554abc9cee40f56d3c650262d700b41248bd7e83a6c8c89f21f6f" Dec 02 14:04:24 crc kubenswrapper[4625]: I1202 14:04:24.552854 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4jlmz" event={"ID":"be753d78-885f-4972-9174-039b19cf978e","Type":"ContainerDied","Data":"e9a0606c210621bbc9bb2598e84895ac2b7390a955bfea9a7cf141d934245900"} Dec 02 14:04:24 crc kubenswrapper[4625]: I1202 14:04:24.552915 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9a0606c210621bbc9bb2598e84895ac2b7390a955bfea9a7cf141d934245900" Dec 02 14:04:24 crc kubenswrapper[4625]: I1202 14:04:24.552918 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4jlmz" Dec 02 14:04:24 crc kubenswrapper[4625]: I1202 14:04:24.555378 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-29vvf" event={"ID":"d506dcc8-4877-472a-ad57-f88d656e84f3","Type":"ContainerDied","Data":"622bfaf5b293ee425e1008ff3c6a8fd9a686faef681824065fae38f060b3cb6a"} Dec 02 14:04:24 crc kubenswrapper[4625]: I1202 14:04:24.555409 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="622bfaf5b293ee425e1008ff3c6a8fd9a686faef681824065fae38f060b3cb6a" Dec 02 14:04:24 crc kubenswrapper[4625]: I1202 14:04:24.555437 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-29vvf" Dec 02 14:04:24 crc kubenswrapper[4625]: I1202 14:04:24.557044 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-556e-account-create-update-wgkgf" event={"ID":"28e8e120-7dce-4848-ad79-d3b0ad33a778","Type":"ContainerDied","Data":"7886b950410bf1122089f2292c67952ac1905c8e74f730bd2756c1b0a294ec68"} Dec 02 14:04:24 crc kubenswrapper[4625]: I1202 14:04:24.557074 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7886b950410bf1122089f2292c67952ac1905c8e74f730bd2756c1b0a294ec68" Dec 02 14:04:24 crc kubenswrapper[4625]: I1202 14:04:24.558378 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-556e-account-create-update-wgkgf" Dec 02 14:04:27 crc kubenswrapper[4625]: I1202 14:04:27.593721 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-flqpl" event={"ID":"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1","Type":"ContainerStarted","Data":"8d8fedc6ad0e0f9647f59260e7936d338d63f2252c29cb096e1f2bbfc44a523d"} Dec 02 14:04:27 crc kubenswrapper[4625]: I1202 14:04:27.623997 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-flqpl" podStartSLOduration=13.635824724999999 podStartE2EDuration="20.623966901s" podCreationTimestamp="2025-12-02 14:04:07 +0000 UTC" firstStartedPulling="2025-12-02 14:04:20.319685778 +0000 UTC m=+1216.281862853" lastFinishedPulling="2025-12-02 14:04:27.307827954 +0000 UTC m=+1223.270005029" observedRunningTime="2025-12-02 14:04:27.612192194 +0000 UTC m=+1223.574369269" watchObservedRunningTime="2025-12-02 14:04:27.623966901 +0000 UTC m=+1223.586143986" Dec 02 14:04:28 crc kubenswrapper[4625]: I1202 14:04:28.689504 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2953913-1ab3-4821-ab7d-8a20cb58ad90","Type":"ContainerStarted","Data":"7509c193008b5a941c292f1a7295d5bfbbdc16c97310f9c85f92944e2be0cd56"} Dec 02 14:04:29 crc kubenswrapper[4625]: I1202 14:04:29.715699 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2953913-1ab3-4821-ab7d-8a20cb58ad90","Type":"ContainerStarted","Data":"ce43ef2703e660d01b1f8cb05c8fe39e745ba77ae8551975e7bb1b6d167ca7d9"} Dec 02 14:04:29 crc kubenswrapper[4625]: I1202 14:04:29.716196 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2953913-1ab3-4821-ab7d-8a20cb58ad90","Type":"ContainerStarted","Data":"c9ae501c0420a4f30cc4852459ab7ac95e74c3ff754dff5b42ec32d824dd2f02"} Dec 02 14:04:29 crc kubenswrapper[4625]: I1202 14:04:29.716209 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2953913-1ab3-4821-ab7d-8a20cb58ad90","Type":"ContainerStarted","Data":"109549e7bd8bc3e7a712e46a1a0bee0c3e3f6ff1c251212cb911f45cabc26c6c"} Dec 02 14:04:31 crc kubenswrapper[4625]: I1202 14:04:31.743820 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2953913-1ab3-4821-ab7d-8a20cb58ad90","Type":"ContainerStarted","Data":"fe92f4594fe0b9969b0676d4c87f71f5639094d30a0989f8a6e1297ed56e11fa"} Dec 02 14:04:32 crc kubenswrapper[4625]: I1202 14:04:32.761968 4625 generic.go:334] "Generic (PLEG): container finished" podID="f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1" containerID="8d8fedc6ad0e0f9647f59260e7936d338d63f2252c29cb096e1f2bbfc44a523d" exitCode=0 Dec 02 14:04:32 crc kubenswrapper[4625]: I1202 14:04:32.762018 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-flqpl" event={"ID":"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1","Type":"ContainerDied","Data":"8d8fedc6ad0e0f9647f59260e7936d338d63f2252c29cb096e1f2bbfc44a523d"} Dec 02 14:04:32 crc kubenswrapper[4625]: I1202 14:04:32.792741 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2953913-1ab3-4821-ab7d-8a20cb58ad90","Type":"ContainerStarted","Data":"838bff712af36ebaa383e504157daf38a12c329e219d02c82d0e683c3af6bc04"} Dec 02 14:04:32 crc kubenswrapper[4625]: I1202 14:04:32.792811 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"b2953913-1ab3-4821-ab7d-8a20cb58ad90","Type":"ContainerStarted","Data":"583ac725e916178308bff98a9fb8cfc11fa854d3a2a8b9c9acd5667f2c695d68"} Dec 02 14:04:32 crc kubenswrapper[4625]: I1202 14:04:32.792820 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2953913-1ab3-4821-ab7d-8a20cb58ad90","Type":"ContainerStarted","Data":"f69cce793d03d6cc4d91a068cc783ed0e8bf6241f4c8cbbc9aa9ca20d2466ea7"} Dec 02 14:04:32 crc kubenswrapper[4625]: I1202 14:04:32.792830 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2953913-1ab3-4821-ab7d-8a20cb58ad90","Type":"ContainerStarted","Data":"0a78dd31a1b6546f4819ae75e561f1aa0927f2f4778b74ed9cedcf8730ec0be5"} Dec 02 14:04:32 crc kubenswrapper[4625]: I1202 14:04:32.792842 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2953913-1ab3-4821-ab7d-8a20cb58ad90","Type":"ContainerStarted","Data":"01ea5280a033b20550251849294a168619728792a8e135ccbe83bebe5a000d9c"} Dec 02 14:04:33 crc kubenswrapper[4625]: I1202 14:04:33.810270 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b2953913-1ab3-4821-ab7d-8a20cb58ad90","Type":"ContainerStarted","Data":"5522e29a6d0a858ed8cc4fb807e8b49643c1034a8e4fbd314a93f5302a95c1b1"} Dec 02 14:04:33 crc kubenswrapper[4625]: I1202 14:04:33.851673 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=41.87411934 podStartE2EDuration="52.851642554s" podCreationTimestamp="2025-12-02 14:03:41 +0000 UTC" firstStartedPulling="2025-12-02 14:04:20.415290367 +0000 UTC m=+1216.377467432" lastFinishedPulling="2025-12-02 14:04:31.392813571 +0000 UTC m=+1227.354990646" observedRunningTime="2025-12-02 14:04:33.847375939 +0000 UTC m=+1229.809553014" watchObservedRunningTime="2025-12-02 14:04:33.851642554 +0000 UTC m=+1229.813819639" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.214020 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-flqpl" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.236380 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-28h9c"] Dec 02 14:04:34 crc kubenswrapper[4625]: E1202 14:04:34.236969 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab9b0dc-588c-4c86-a1de-25e3c71dce53" containerName="mariadb-account-create-update" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.237000 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab9b0dc-588c-4c86-a1de-25e3c71dce53" containerName="mariadb-account-create-update" Dec 02 14:04:34 crc kubenswrapper[4625]: E1202 14:04:34.237017 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4349bf5c-375b-45aa-b845-70ea55a35bf9" containerName="mariadb-account-create-update" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.237027 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="4349bf5c-375b-45aa-b845-70ea55a35bf9" containerName="mariadb-account-create-update" Dec 02 14:04:34 crc kubenswrapper[4625]: E1202 14:04:34.237059 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5858ebe-f677-4a48-b729-a8c4023b346d" containerName="swift-ring-rebalance" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.237070 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5858ebe-f677-4a48-b729-a8c4023b346d" containerName="swift-ring-rebalance" Dec 02 14:04:34 crc kubenswrapper[4625]: E1202 14:04:34.237085 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e8e120-7dce-4848-ad79-d3b0ad33a778" containerName="mariadb-account-create-update" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.237092 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e8e120-7dce-4848-ad79-d3b0ad33a778" containerName="mariadb-account-create-update" Dec 02 14:04:34 crc kubenswrapper[4625]: E1202 14:04:34.237104 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1" containerName="keystone-db-sync" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.237112 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1" containerName="keystone-db-sync" Dec 02 14:04:34 crc kubenswrapper[4625]: E1202 14:04:34.237126 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d57ba63-e3b3-40a9-b5a5-88c6654b04fe" containerName="mariadb-database-create" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.237133 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d57ba63-e3b3-40a9-b5a5-88c6654b04fe" containerName="mariadb-database-create" Dec 02 14:04:34 crc kubenswrapper[4625]: E1202 14:04:34.237159 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be753d78-885f-4972-9174-039b19cf978e" containerName="mariadb-database-create" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.237168 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="be753d78-885f-4972-9174-039b19cf978e" containerName="mariadb-database-create" Dec 02 14:04:34 crc kubenswrapper[4625]: E1202 14:04:34.237179 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d506dcc8-4877-472a-ad57-f88d656e84f3" containerName="mariadb-database-create" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.237188 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="d506dcc8-4877-472a-ad57-f88d656e84f3" containerName="mariadb-database-create" Dec 02 
14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.237455 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab9b0dc-588c-4c86-a1de-25e3c71dce53" containerName="mariadb-account-create-update" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.237491 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="be753d78-885f-4972-9174-039b19cf978e" containerName="mariadb-database-create" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.237510 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="d506dcc8-4877-472a-ad57-f88d656e84f3" containerName="mariadb-database-create" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.237530 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e8e120-7dce-4848-ad79-d3b0ad33a778" containerName="mariadb-account-create-update" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.237550 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5858ebe-f677-4a48-b729-a8c4023b346d" containerName="swift-ring-rebalance" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.237567 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="4349bf5c-375b-45aa-b845-70ea55a35bf9" containerName="mariadb-account-create-update" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.237584 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1" containerName="keystone-db-sync" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.237599 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d57ba63-e3b3-40a9-b5a5-88c6654b04fe" containerName="mariadb-database-create" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.238861 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.241868 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.256498 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-28h9c"] Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.390159 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwcd9\" (UniqueName: \"kubernetes.io/projected/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-kube-api-access-fwcd9\") pod \"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1\" (UID: \"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1\") " Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.390271 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-combined-ca-bundle\") pod \"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1\" (UID: \"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1\") " Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.390365 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-config-data\") pod \"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1\" (UID: \"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1\") " Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.390745 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-config\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.390842 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-dns-svc\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.390873 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.390895 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.390959 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 
14:04:34.390993 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csdgj\" (UniqueName: \"kubernetes.io/projected/f029f325-63da-4a0d-ba45-b79e41e91e72-kube-api-access-csdgj\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.405625 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-kube-api-access-fwcd9" (OuterVolumeSpecName: "kube-api-access-fwcd9") pod "f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1" (UID: "f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1"). InnerVolumeSpecName "kube-api-access-fwcd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.431510 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1" (UID: "f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.492683 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.493128 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csdgj\" (UniqueName: \"kubernetes.io/projected/f029f325-63da-4a0d-ba45-b79e41e91e72-kube-api-access-csdgj\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.493356 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-config\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.493494 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-dns-svc\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.493600 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.493726 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: 
\"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.493896 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwcd9\" (UniqueName: \"kubernetes.io/projected/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-kube-api-access-fwcd9\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.494013 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.494099 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.494918 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.494957 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-config\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.495736 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-dns-svc\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.496243 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.518060 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-config-data" (OuterVolumeSpecName: "config-data") pod "f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1" (UID: "f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.522874 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csdgj\" (UniqueName: \"kubernetes.io/projected/f029f325-63da-4a0d-ba45-b79e41e91e72-kube-api-access-csdgj\") pod \"dnsmasq-dns-764c5664d7-28h9c\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.559766 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.595396 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.829105 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-flqpl" Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.829102 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-flqpl" event={"ID":"f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1","Type":"ContainerDied","Data":"af118b796f5260dd352184073146cfabf30009da5c497bad2e662793273176d9"} Dec 02 14:04:34 crc kubenswrapper[4625]: I1202 14:04:34.829188 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af118b796f5260dd352184073146cfabf30009da5c497bad2e662793273176d9" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.176261 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-28h9c"] Dec 02 14:04:35 crc kubenswrapper[4625]: W1202 14:04:35.275557 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf029f325_63da_4a0d_ba45_b79e41e91e72.slice/crio-11632092b88309f417ce1ed52952cbfca153bd302f1e48a032e1d47356a6bbc8 WatchSource:0}: Error finding container 11632092b88309f417ce1ed52952cbfca153bd302f1e48a032e1d47356a6bbc8: Status 404 returned error can't find the container with id 11632092b88309f417ce1ed52952cbfca153bd302f1e48a032e1d47356a6bbc8 Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.279656 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-p8prn"] Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.288298 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.303093 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.303442 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.308283 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.308669 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.321894 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-v2szz" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.329623 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p8prn"] Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.414571 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-scripts\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.414882 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqjwv\" (UniqueName: \"kubernetes.io/projected/2dcf8f03-f5f2-46ea-964d-07412aee459d-kube-api-access-tqjwv\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.415116 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-fernet-keys\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.415180 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-combined-ca-bundle\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.415247 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-credential-keys\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.415369 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-config-data\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.460129 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-764c5664d7-28h9c"] Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.524298 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-config-data\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.524606 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-scripts\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.524708 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqjwv\" (UniqueName: \"kubernetes.io/projected/2dcf8f03-f5f2-46ea-964d-07412aee459d-kube-api-access-tqjwv\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.524787 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-fernet-keys\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.524821 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-combined-ca-bundle\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.524872 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-credential-keys\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.530882 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-credential-keys\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.535996 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-scripts\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.537283 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-config-data\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.540765 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-fernet-keys\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.543770 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-combined-ca-bundle\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.594895 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqjwv\" (UniqueName: \"kubernetes.io/projected/2dcf8f03-f5f2-46ea-964d-07412aee459d-kube-api-access-tqjwv\") pod \"keystone-bootstrap-p8prn\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.614177 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-s6l5d"] Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.615915 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.696911 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-s6l5d"] Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.748930 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.749447 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-dns-svc\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.749592 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.749754 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-config\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.749848 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghvcv\" (UniqueName: \"kubernetes.io/projected/fadcb0e2-13a4-405a-b831-2d6a7739bf32-kube-api-access-ghvcv\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 
crc kubenswrapper[4625]: I1202 14:04:35.749945 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.789208 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.853072 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-config\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.853597 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghvcv\" (UniqueName: \"kubernetes.io/projected/fadcb0e2-13a4-405a-b831-2d6a7739bf32-kube-api-access-ghvcv\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.853630 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.853715 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.853790 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-dns-svc\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.853835 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.855023 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.855892 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.855992 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.856473 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-dns-svc\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.865727 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-config\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.875301 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cdb94fc6c-t5c7p"] Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.877296 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.901206 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.901534 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.901720 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-jp5lq" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.901456 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.924123 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-28h9c" event={"ID":"f029f325-63da-4a0d-ba45-b79e41e91e72","Type":"ContainerStarted","Data":"11632092b88309f417ce1ed52952cbfca153bd302f1e48a032e1d47356a6bbc8"} Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.925131 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghvcv\" (UniqueName: \"kubernetes.io/projected/fadcb0e2-13a4-405a-b831-2d6a7739bf32-kube-api-access-ghvcv\") pod \"dnsmasq-dns-5959f8865f-s6l5d\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.945523 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cdb94fc6c-t5c7p"] Dec 02 14:04:35 crc kubenswrapper[4625]: I1202 14:04:35.982093 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.057331 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-config-data\") pod \"horizon-6cdb94fc6c-t5c7p\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.057417 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-scripts\") pod \"horizon-6cdb94fc6c-t5c7p\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.057499 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmh6b\" (UniqueName: \"kubernetes.io/projected/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-kube-api-access-gmh6b\") pod \"horizon-6cdb94fc6c-t5c7p\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.057544 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-horizon-secret-key\") pod \"horizon-6cdb94fc6c-t5c7p\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.057562 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-logs\") pod \"horizon-6cdb94fc6c-t5c7p\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.113399 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-blvbp"] Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.114881 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-blvbp" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.127747 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.128206 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-j66qt" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.128570 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.159755 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmh6b\" (UniqueName: \"kubernetes.io/projected/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-kube-api-access-gmh6b\") pod \"horizon-6cdb94fc6c-t5c7p\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.159824 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-horizon-secret-key\") pod \"horizon-6cdb94fc6c-t5c7p\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.159896 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-logs\") pod \"horizon-6cdb94fc6c-t5c7p\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.160000 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-config-data\") pod \"horizon-6cdb94fc6c-t5c7p\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.160028 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-scripts\") pod \"horizon-6cdb94fc6c-t5c7p\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.160835 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-scripts\") pod \"horizon-6cdb94fc6c-t5c7p\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.161095 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-logs\") pod \"horizon-6cdb94fc6c-t5c7p\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.161651 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-fdkpr"] Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.163016 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-fdkpr" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.168300 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-config-data\") pod \"horizon-6cdb94fc6c-t5c7p\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.179905 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-horizon-secret-key\") pod \"horizon-6cdb94fc6c-t5c7p\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.180730 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6jr6l" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.180953 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.195413 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-blvbp"] Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.216475 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fdkpr"] Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.269754 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-config-data\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.270238 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b840d2-7458-4769-9650-e62ff8676008-combined-ca-bundle\") pod \"barbican-db-sync-fdkpr\" (UID: \"c2b840d2-7458-4769-9650-e62ff8676008\") " pod="openstack/barbican-db-sync-fdkpr" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.270360 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c2b840d2-7458-4769-9650-e62ff8676008-db-sync-config-data\") pod \"barbican-db-sync-fdkpr\" (UID: \"c2b840d2-7458-4769-9650-e62ff8676008\") " pod="openstack/barbican-db-sync-fdkpr" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.270458 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-combined-ca-bundle\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.270563 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-db-sync-config-data\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.270697 4625 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29ce362-3978-4713-833d-49aab29a394c-etc-machine-id\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.270772 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-scripts\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.270875 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g65z6\" (UniqueName: \"kubernetes.io/projected/c29ce362-3978-4713-833d-49aab29a394c-kube-api-access-g65z6\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.270984 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qjmj\" (UniqueName: \"kubernetes.io/projected/c2b840d2-7458-4769-9650-e62ff8676008-kube-api-access-9qjmj\") pod \"barbican-db-sync-fdkpr\" (UID: \"c2b840d2-7458-4769-9650-e62ff8676008\") " pod="openstack/barbican-db-sync-fdkpr" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.300726 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmh6b\" (UniqueName: \"kubernetes.io/projected/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-kube-api-access-gmh6b\") pod \"horizon-6cdb94fc6c-t5c7p\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.382186 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-tpfs8"] Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.384274 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b840d2-7458-4769-9650-e62ff8676008-combined-ca-bundle\") pod \"barbican-db-sync-fdkpr\" (UID: \"c2b840d2-7458-4769-9650-e62ff8676008\") " pod="openstack/barbican-db-sync-fdkpr" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.413193 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c2b840d2-7458-4769-9650-e62ff8676008-db-sync-config-data\") pod \"barbican-db-sync-fdkpr\" (UID: \"c2b840d2-7458-4769-9650-e62ff8676008\") " pod="openstack/barbican-db-sync-fdkpr" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.413865 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-combined-ca-bundle\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp" Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.414034 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-db-sync-config-data\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp" 
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.414179 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29ce362-3978-4713-833d-49aab29a394c-etc-machine-id\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.414365 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-scripts\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.414584 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g65z6\" (UniqueName: \"kubernetes.io/projected/c29ce362-3978-4713-833d-49aab29a394c-kube-api-access-g65z6\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.414787 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qjmj\" (UniqueName: \"kubernetes.io/projected/c2b840d2-7458-4769-9650-e62ff8676008-kube-api-access-9qjmj\") pod \"barbican-db-sync-fdkpr\" (UID: \"c2b840d2-7458-4769-9650-e62ff8676008\") " pod="openstack/barbican-db-sync-fdkpr"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.415020 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-config-data\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.421824 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29ce362-3978-4713-833d-49aab29a394c-etc-machine-id\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.405301 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b840d2-7458-4769-9650-e62ff8676008-combined-ca-bundle\") pod \"barbican-db-sync-fdkpr\" (UID: \"c2b840d2-7458-4769-9650-e62ff8676008\") " pod="openstack/barbican-db-sync-fdkpr"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.421943 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c2b840d2-7458-4769-9650-e62ff8676008-db-sync-config-data\") pod \"barbican-db-sync-fdkpr\" (UID: \"c2b840d2-7458-4769-9650-e62ff8676008\") " pod="openstack/barbican-db-sync-fdkpr"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.452693 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-db-sync-config-data\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.456467 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-config-data\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.458709 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tpfs8"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.470107 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-combined-ca-bundle\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.479387 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-scripts\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.498559 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.499445 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.499505 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-tmhd7"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.521905 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8ff9c589c-cnghx"]
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.522999 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/506784cb-9737-438b-bd53-4719527b47bf-config\") pod \"neutron-db-sync-tpfs8\" (UID: \"506784cb-9737-438b-bd53-4719527b47bf\") " pod="openstack/neutron-db-sync-tpfs8"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.528342 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506784cb-9737-438b-bd53-4719527b47bf-combined-ca-bundle\") pod \"neutron-db-sync-tpfs8\" (UID: \"506784cb-9737-438b-bd53-4719527b47bf\") " pod="openstack/neutron-db-sync-tpfs8"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.528878 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfx54\" (UniqueName: \"kubernetes.io/projected/506784cb-9737-438b-bd53-4719527b47bf-kube-api-access-nfx54\") pod \"neutron-db-sync-tpfs8\" (UID: \"506784cb-9737-438b-bd53-4719527b47bf\") " pod="openstack/neutron-db-sync-tpfs8"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.523917 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.527126 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cdb94fc6c-t5c7p"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.585625 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tpfs8"]
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.603346 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g65z6\" (UniqueName: \"kubernetes.io/projected/c29ce362-3978-4713-833d-49aab29a394c-kube-api-access-g65z6\") pod \"cinder-db-sync-blvbp\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " pod="openstack/cinder-db-sync-blvbp"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.627139 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qjmj\" (UniqueName: \"kubernetes.io/projected/c2b840d2-7458-4769-9650-e62ff8676008-kube-api-access-9qjmj\") pod \"barbican-db-sync-fdkpr\" (UID: \"c2b840d2-7458-4769-9650-e62ff8676008\") " pod="openstack/barbican-db-sync-fdkpr"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.637734 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfx54\" (UniqueName: \"kubernetes.io/projected/506784cb-9737-438b-bd53-4719527b47bf-kube-api-access-nfx54\") pod \"neutron-db-sync-tpfs8\" (UID: \"506784cb-9737-438b-bd53-4719527b47bf\") " pod="openstack/neutron-db-sync-tpfs8"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.642549 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6b23945-e07f-4c02-9cb4-1515c58af99b-logs\") pod \"horizon-8ff9c589c-cnghx\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.642703 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/506784cb-9737-438b-bd53-4719527b47bf-config\") pod \"neutron-db-sync-tpfs8\" (UID: \"506784cb-9737-438b-bd53-4719527b47bf\") " pod="openstack/neutron-db-sync-tpfs8"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.642888 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506784cb-9737-438b-bd53-4719527b47bf-combined-ca-bundle\") pod \"neutron-db-sync-tpfs8\" (UID: \"506784cb-9737-438b-bd53-4719527b47bf\") " pod="openstack/neutron-db-sync-tpfs8"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.643069 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q22zt\" (UniqueName: \"kubernetes.io/projected/d6b23945-e07f-4c02-9cb4-1515c58af99b-kube-api-access-q22zt\") pod \"horizon-8ff9c589c-cnghx\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.643159 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6b23945-e07f-4c02-9cb4-1515c58af99b-scripts\") pod \"horizon-8ff9c589c-cnghx\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.643261 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6b23945-e07f-4c02-9cb4-1515c58af99b-config-data\") pod \"horizon-8ff9c589c-cnghx\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.643419 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6b23945-e07f-4c02-9cb4-1515c58af99b-horizon-secret-key\") pod \"horizon-8ff9c589c-cnghx\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.651295 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/506784cb-9737-438b-bd53-4719527b47bf-config\") pod \"neutron-db-sync-tpfs8\" (UID: \"506784cb-9737-438b-bd53-4719527b47bf\") " pod="openstack/neutron-db-sync-tpfs8"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.656415 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8ff9c589c-cnghx"]
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.657869 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506784cb-9737-438b-bd53-4719527b47bf-combined-ca-bundle\") pod \"neutron-db-sync-tpfs8\" (UID: \"506784cb-9737-438b-bd53-4719527b47bf\") " pod="openstack/neutron-db-sync-tpfs8"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.682791 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-blvbp"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.691538 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfx54\" (UniqueName: \"kubernetes.io/projected/506784cb-9737-438b-bd53-4719527b47bf-kube-api-access-nfx54\") pod \"neutron-db-sync-tpfs8\" (UID: \"506784cb-9737-438b-bd53-4719527b47bf\") " pod="openstack/neutron-db-sync-tpfs8"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.723274 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fdkpr"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.799477 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6b23945-e07f-4c02-9cb4-1515c58af99b-logs\") pod \"horizon-8ff9c589c-cnghx\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.800143 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q22zt\" (UniqueName: \"kubernetes.io/projected/d6b23945-e07f-4c02-9cb4-1515c58af99b-kube-api-access-q22zt\") pod \"horizon-8ff9c589c-cnghx\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.800235 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6b23945-e07f-4c02-9cb4-1515c58af99b-scripts\") pod \"horizon-8ff9c589c-cnghx\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.800282 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6b23945-e07f-4c02-9cb4-1515c58af99b-config-data\") pod \"horizon-8ff9c589c-cnghx\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.806121 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-s6l5d"]
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.813084 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6b23945-e07f-4c02-9cb4-1515c58af99b-horizon-secret-key\") pod \"horizon-8ff9c589c-cnghx\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.815338 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6b23945-e07f-4c02-9cb4-1515c58af99b-logs\") pod \"horizon-8ff9c589c-cnghx\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.815918 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6b23945-e07f-4c02-9cb4-1515c58af99b-scripts\") pod \"horizon-8ff9c589c-cnghx\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.844568 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tpfs8"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.866441 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6b23945-e07f-4c02-9cb4-1515c58af99b-horizon-secret-key\") pod \"horizon-8ff9c589c-cnghx\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.867678 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6b23945-e07f-4c02-9cb4-1515c58af99b-config-data\") pod \"horizon-8ff9c589c-cnghx\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.869682 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q22zt\" (UniqueName: \"kubernetes.io/projected/d6b23945-e07f-4c02-9cb4-1515c58af99b-kube-api-access-q22zt\") pod \"horizon-8ff9c589c-cnghx\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:36 crc kubenswrapper[4625]: I1202 14:04:36.953497 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8ff9c589c-cnghx"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.041365 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2swhw"]
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.053327 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2swhw"]
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.053381 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.054623 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.055858 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.060738 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zlllv"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.061075 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.061226 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.061366 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.061558 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.065390 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"]
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.072173 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.129573 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"]
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.229444 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.235861 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-combined-ca-bundle\") pod \"placement-db-sync-2swhw\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") " pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.236095 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8n99\" (UniqueName: \"kubernetes.io/projected/ef2b49b4-2807-4a26-8876-3cc189692b73-kube-api-access-c8n99\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.236212 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-scripts\") pod \"placement-db-sync-2swhw\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") " pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.236412 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.236527 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-config-data\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.236650 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.236769 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r27b6\" (UniqueName: \"kubernetes.io/projected/d7887abf-7df6-4058-b3f0-e58295b168c1-kube-api-access-r27b6\") pod \"placement-db-sync-2swhw\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") " pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.246653 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-config\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.246881 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-config-data\") pod \"placement-db-sync-2swhw\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") " pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.247060 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7887abf-7df6-4058-b3f0-e58295b168c1-logs\") pod \"placement-db-sync-2swhw\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") " pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.247181 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgpgh\" (UniqueName: \"kubernetes.io/projected/abbd3215-4ced-473b-84a7-1f859e2782b2-kube-api-access-rgpgh\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.247333 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-scripts\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.247446 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd3215-4ced-473b-84a7-1f859e2782b2-run-httpd\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.247693 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.248194 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd3215-4ced-473b-84a7-1f859e2782b2-log-httpd\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.255187 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.255591 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.255839 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358412 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-combined-ca-bundle\") pod \"placement-db-sync-2swhw\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") " pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358476 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8n99\" (UniqueName: \"kubernetes.io/projected/ef2b49b4-2807-4a26-8876-3cc189692b73-kube-api-access-c8n99\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358498 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-scripts\") pod \"placement-db-sync-2swhw\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") " pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358547 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358578 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-config-data\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358610 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358639 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r27b6\" (UniqueName: \"kubernetes.io/projected/d7887abf-7df6-4058-b3f0-e58295b168c1-kube-api-access-r27b6\") pod \"placement-db-sync-2swhw\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") " pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358665 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-config\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358699 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-config-data\") pod \"placement-db-sync-2swhw\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") " pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358735 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7887abf-7df6-4058-b3f0-e58295b168c1-logs\") pod \"placement-db-sync-2swhw\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") " pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358759 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgpgh\" (UniqueName: \"kubernetes.io/projected/abbd3215-4ced-473b-84a7-1f859e2782b2-kube-api-access-rgpgh\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358778 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-scripts\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358792 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd3215-4ced-473b-84a7-1f859e2782b2-run-httpd\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358827 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358850 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd3215-4ced-473b-84a7-1f859e2782b2-log-httpd\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358887 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358905 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.358933 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.359980 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.372670 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7887abf-7df6-4058-b3f0-e58295b168c1-logs\") pod \"placement-db-sync-2swhw\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") " pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.374687 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.389294 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd3215-4ced-473b-84a7-1f859e2782b2-run-httpd\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.390013 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.390296 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd3215-4ced-473b-84a7-1f859e2782b2-log-httpd\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.390772 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.394576 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-config\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.396843 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.423827 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-config-data\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.424337 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8n99\" (UniqueName: \"kubernetes.io/projected/ef2b49b4-2807-4a26-8876-3cc189692b73-kube-api-access-c8n99\") pod \"dnsmasq-dns-58dd9ff6bc-qr7hw\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.424500 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-scripts\") pod \"placement-db-sync-2swhw\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") " pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.425575 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.426542 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-config-data\") pod \"placement-db-sync-2swhw\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") " pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.426555 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-scripts\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.428165 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-combined-ca-bundle\") pod \"placement-db-sync-2swhw\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") " pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.434858 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r27b6\" (UniqueName: \"kubernetes.io/projected/d7887abf-7df6-4058-b3f0-e58295b168c1-kube-api-access-r27b6\") pod \"placement-db-sync-2swhw\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") " pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.436696 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgpgh\" (UniqueName: \"kubernetes.io/projected/abbd3215-4ced-473b-84a7-1f859e2782b2-kube-api-access-rgpgh\") pod \"ceilometer-0\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.512128 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.633262 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.642942 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p8prn"]
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.706436 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-s6l5d"]
Dec 02 14:04:37 crc kubenswrapper[4625]: I1202 14:04:37.723766 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2swhw"
Dec 02 14:04:38 crc kubenswrapper[4625]: I1202 14:04:38.009077 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6xqcm" event={"ID":"314f653d-9ec6-47e4-af2a-aadc2440d332","Type":"ContainerStarted","Data":"d0493e0f0a3c8678839326776275f5db0684b9f0dfa81b8729b8bc8fc7e290d8"}
Dec 02 14:04:38 crc kubenswrapper[4625]: I1202 14:04:38.065517 4625 generic.go:334] "Generic (PLEG): container finished" podID="f029f325-63da-4a0d-ba45-b79e41e91e72" containerID="b07594e59866df473d3f51bb81b469d8b88dee50063625b94ec1f157f86c6896" exitCode=0
Dec 02 14:04:38 crc kubenswrapper[4625]: I1202 14:04:38.066285 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-28h9c" event={"ID":"f029f325-63da-4a0d-ba45-b79e41e91e72","Type":"ContainerDied","Data":"b07594e59866df473d3f51bb81b469d8b88dee50063625b94ec1f157f86c6896"}
Dec 02 14:04:38 crc kubenswrapper[4625]: I1202 14:04:38.105429 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-6xqcm" podStartSLOduration=7.029406681 podStartE2EDuration="49.105399375s" podCreationTimestamp="2025-12-02 14:03:49 +0000 UTC" firstStartedPulling="2025-12-02 14:03:52.875888048 +0000 UTC m=+1188.838065123" lastFinishedPulling="2025-12-02 14:04:34.951880742 +0000 UTC m=+1230.914057817" observedRunningTime="2025-12-02 14:04:38.050256997 +0000 UTC m=+1234.012434072" watchObservedRunningTime="2025-12-02 14:04:38.105399375 +0000 UTC m=+1234.067576450"
Dec 02 14:04:38 crc kubenswrapper[4625]: I1202 14:04:38.140953 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cdb94fc6c-t5c7p"]
Dec 02 14:04:38 crc kubenswrapper[4625]: I1202 14:04:38.173826 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p8prn" event={"ID":"2dcf8f03-f5f2-46ea-964d-07412aee459d","Type":"ContainerStarted","Data":"f4f329d036c962c4d9cce85c60078f79efb21501a47ec269915846ee99bb6cc0"}
Dec 02 14:04:38 crc kubenswrapper[4625]: I1202 14:04:38.175999 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" event={"ID":"fadcb0e2-13a4-405a-b831-2d6a7739bf32","Type":"ContainerStarted","Data":"600cac08a316fe626601b20e5bf411091a490a0bc57351f9029148edb6b579ea"}
Dec 02 14:04:38 crc kubenswrapper[4625]: I1202 14:04:38.211828 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fdkpr"]
Dec 02 14:04:39 crc kubenswrapper[4625]: W1202 14:04:38.319894 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2b840d2_7458_4769_9650_e62ff8676008.slice/crio-135159ad9c1fadaa0cafd79a9e6209651a8d002335e4760d482873b674bbcce7 WatchSource:0}: Error finding container 135159ad9c1fadaa0cafd79a9e6209651a8d002335e4760d482873b674bbcce7: Status 404 returned error can't find the container with id 135159ad9c1fadaa0cafd79a9e6209651a8d002335e4760d482873b674bbcce7
Dec 02 14:04:39 crc kubenswrapper[4625]: I1202 14:04:38.498092 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tpfs8"] Dec 02 14:04:39 crc kubenswrapper[4625]: I1202 14:04:39.188470 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdb94fc6c-t5c7p" event={"ID":"52ce2aac-6c7b-4458-b4b8-b1467eb43de0","Type":"ContainerStarted","Data":"dc8afc995f2b611dffc314779825532645d76ae8c8be4cb3a12517ce119dd37f"} Dec 02 14:04:39 crc kubenswrapper[4625]: I1202 14:04:39.191005 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fdkpr" event={"ID":"c2b840d2-7458-4769-9650-e62ff8676008","Type":"ContainerStarted","Data":"135159ad9c1fadaa0cafd79a9e6209651a8d002335e4760d482873b674bbcce7"} Dec 02 14:04:39 crc kubenswrapper[4625]: I1202 14:04:39.195461 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tpfs8" event={"ID":"506784cb-9737-438b-bd53-4719527b47bf","Type":"ContainerStarted","Data":"631d79035e0e3fd8d298c70b03579590f9ba71682e1ded470cdf3dc32d86f038"} Dec 02 14:04:39 crc kubenswrapper[4625]: I1202 14:04:39.195512 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tpfs8" event={"ID":"506784cb-9737-438b-bd53-4719527b47bf","Type":"ContainerStarted","Data":"53d96f40596e541c9b2ca89a76912bcc3c765370da063cc520029b960a718c48"} Dec 02 14:04:39 crc kubenswrapper[4625]: I1202 14:04:39.199187 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p8prn" event={"ID":"2dcf8f03-f5f2-46ea-964d-07412aee459d","Type":"ContainerStarted","Data":"dde77d1e0c32452a78e4539fea8157b9f734116b752418838b9767f2d7e247bd"} Dec 02 14:04:39 crc kubenswrapper[4625]: I1202 14:04:39.202734 4625 generic.go:334] "Generic (PLEG): container finished" podID="fadcb0e2-13a4-405a-b831-2d6a7739bf32" containerID="f3ec6248de430218df10f13501aea5de184adaff575fb77031f9eee3ade90b48" exitCode=0 Dec 02 14:04:39 crc kubenswrapper[4625]: I1202 14:04:39.202772 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" event={"ID":"fadcb0e2-13a4-405a-b831-2d6a7739bf32","Type":"ContainerDied","Data":"f3ec6248de430218df10f13501aea5de184adaff575fb77031f9eee3ade90b48"} Dec 02 14:04:39 crc kubenswrapper[4625]: I1202 14:04:39.222780 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-tpfs8" podStartSLOduration=3.222753033 podStartE2EDuration="3.222753033s" podCreationTimestamp="2025-12-02 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:39.215532289 +0000 UTC m=+1235.177709364" watchObservedRunningTime="2025-12-02 14:04:39.222753033 +0000 UTC m=+1235.184930108" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.239715 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-p8prn" podStartSLOduration=5.239666793 podStartE2EDuration="5.239666793s" podCreationTimestamp="2025-12-02 14:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:39.295859765 +0000 UTC m=+1235.258036840" watchObservedRunningTime="2025-12-02 14:04:40.239666793 +0000 UTC m=+1236.201843868" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.246990 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/horizon-6cdb94fc6c-t5c7p"] Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.313874 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8ff9c589c-cnghx"] Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.402790 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6b6bf499d5-zkn4m"] Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.405021 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.474571 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b6bf499d5-zkn4m"] Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.581015 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac678a62-6d5a-4548-94ff-384289748b18-logs\") pod \"horizon-6b6bf499d5-zkn4m\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.581566 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac678a62-6d5a-4548-94ff-384289748b18-horizon-secret-key\") pod \"horizon-6b6bf499d5-zkn4m\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.581629 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf9k4\" (UniqueName: \"kubernetes.io/projected/ac678a62-6d5a-4548-94ff-384289748b18-kube-api-access-sf9k4\") pod \"horizon-6b6bf499d5-zkn4m\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.581666 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac678a62-6d5a-4548-94ff-384289748b18-config-data\") pod \"horizon-6b6bf499d5-zkn4m\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.581719 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac678a62-6d5a-4548-94ff-384289748b18-scripts\") pod \"horizon-6b6bf499d5-zkn4m\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.601425 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-blvbp"] Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.648407 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.684863 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac678a62-6d5a-4548-94ff-384289748b18-horizon-secret-key\") pod \"horizon-6b6bf499d5-zkn4m\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.685669 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf9k4\" (UniqueName: 
\"kubernetes.io/projected/ac678a62-6d5a-4548-94ff-384289748b18-kube-api-access-sf9k4\") pod \"horizon-6b6bf499d5-zkn4m\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.685800 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac678a62-6d5a-4548-94ff-384289748b18-config-data\") pod \"horizon-6b6bf499d5-zkn4m\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.685906 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac678a62-6d5a-4548-94ff-384289748b18-scripts\") pod \"horizon-6b6bf499d5-zkn4m\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.686051 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac678a62-6d5a-4548-94ff-384289748b18-logs\") pod \"horizon-6b6bf499d5-zkn4m\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.687186 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac678a62-6d5a-4548-94ff-384289748b18-scripts\") pod \"horizon-6b6bf499d5-zkn4m\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.687494 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac678a62-6d5a-4548-94ff-384289748b18-logs\") pod \"horizon-6b6bf499d5-zkn4m\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.688235 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac678a62-6d5a-4548-94ff-384289748b18-config-data\") pod \"horizon-6b6bf499d5-zkn4m\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.695280 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac678a62-6d5a-4548-94ff-384289748b18-horizon-secret-key\") pod \"horizon-6b6bf499d5-zkn4m\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.737810 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf9k4\" (UniqueName: \"kubernetes.io/projected/ac678a62-6d5a-4548-94ff-384289748b18-kube-api-access-sf9k4\") pod \"horizon-6b6bf499d5-zkn4m\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.745610 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"] Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.750001 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.826230 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.832190 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.837115 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.851425 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2swhw"] Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.925791 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghvcv\" (UniqueName: \"kubernetes.io/projected/fadcb0e2-13a4-405a-b831-2d6a7739bf32-kube-api-access-ghvcv\") pod \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.927770 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csdgj\" (UniqueName: \"kubernetes.io/projected/f029f325-63da-4a0d-ba45-b79e41e91e72-kube-api-access-csdgj\") pod \"f029f325-63da-4a0d-ba45-b79e41e91e72\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.927860 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-ovsdbserver-sb\") pod \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.927915 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-dns-svc\") pod \"f029f325-63da-4a0d-ba45-b79e41e91e72\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.928045 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-config\") pod \"f029f325-63da-4a0d-ba45-b79e41e91e72\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.928356 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-dns-swift-storage-0\") pod \"f029f325-63da-4a0d-ba45-b79e41e91e72\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.928457 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-dns-svc\") pod \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.928543 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-dns-swift-storage-0\") pod \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\" (UID: 
\"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.928609 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-ovsdbserver-nb\") pod \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.928662 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-config\") pod \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\" (UID: \"fadcb0e2-13a4-405a-b831-2d6a7739bf32\") " Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.928759 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-ovsdbserver-sb\") pod \"f029f325-63da-4a0d-ba45-b79e41e91e72\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " Dec 02 14:04:40 crc kubenswrapper[4625]: I1202 14:04:40.928885 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-ovsdbserver-nb\") pod \"f029f325-63da-4a0d-ba45-b79e41e91e72\" (UID: \"f029f325-63da-4a0d-ba45-b79e41e91e72\") " Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.023719 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f029f325-63da-4a0d-ba45-b79e41e91e72" (UID: "f029f325-63da-4a0d-ba45-b79e41e91e72"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.030089 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f029f325-63da-4a0d-ba45-b79e41e91e72" (UID: "f029f325-63da-4a0d-ba45-b79e41e91e72"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.039123 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f029f325-63da-4a0d-ba45-b79e41e91e72-kube-api-access-csdgj" (OuterVolumeSpecName: "kube-api-access-csdgj") pod "f029f325-63da-4a0d-ba45-b79e41e91e72" (UID: "f029f325-63da-4a0d-ba45-b79e41e91e72"). InnerVolumeSpecName "kube-api-access-csdgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.040229 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fadcb0e2-13a4-405a-b831-2d6a7739bf32-kube-api-access-ghvcv" (OuterVolumeSpecName: "kube-api-access-ghvcv") pod "fadcb0e2-13a4-405a-b831-2d6a7739bf32" (UID: "fadcb0e2-13a4-405a-b831-2d6a7739bf32"). InnerVolumeSpecName "kube-api-access-ghvcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.070634 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.070688 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghvcv\" (UniqueName: \"kubernetes.io/projected/fadcb0e2-13a4-405a-b831-2d6a7739bf32-kube-api-access-ghvcv\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.070703 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csdgj\" (UniqueName: \"kubernetes.io/projected/f029f325-63da-4a0d-ba45-b79e41e91e72-kube-api-access-csdgj\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.070719 4625 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.123246 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fadcb0e2-13a4-405a-b831-2d6a7739bf32" (UID: "fadcb0e2-13a4-405a-b831-2d6a7739bf32"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.124505 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-config" (OuterVolumeSpecName: "config") pod "fadcb0e2-13a4-405a-b831-2d6a7739bf32" (UID: "fadcb0e2-13a4-405a-b831-2d6a7739bf32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.125347 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-config" (OuterVolumeSpecName: "config") pod "f029f325-63da-4a0d-ba45-b79e41e91e72" (UID: "f029f325-63da-4a0d-ba45-b79e41e91e72"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.136946 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fadcb0e2-13a4-405a-b831-2d6a7739bf32" (UID: "fadcb0e2-13a4-405a-b831-2d6a7739bf32"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.139389 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f029f325-63da-4a0d-ba45-b79e41e91e72" (UID: "f029f325-63da-4a0d-ba45-b79e41e91e72"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.146992 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fadcb0e2-13a4-405a-b831-2d6a7739bf32" (UID: "fadcb0e2-13a4-405a-b831-2d6a7739bf32"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.159772 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fadcb0e2-13a4-405a-b831-2d6a7739bf32" (UID: "fadcb0e2-13a4-405a-b831-2d6a7739bf32"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.171185 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f029f325-63da-4a0d-ba45-b79e41e91e72" (UID: "f029f325-63da-4a0d-ba45-b79e41e91e72"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.172991 4625 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.173084 4625 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.173429 4625 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.173524 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.173590 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.173650 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.173706 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fadcb0e2-13a4-405a-b831-2d6a7739bf32-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.173822 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f029f325-63da-4a0d-ba45-b79e41e91e72-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.254015 4625 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abbd3215-4ced-473b-84a7-1f859e2782b2","Type":"ContainerStarted","Data":"a297e061e447bbbd0c480fe8db00e9219037278e3ff0f383268234c6f71bb7d3"} Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.270066 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-28h9c" event={"ID":"f029f325-63da-4a0d-ba45-b79e41e91e72","Type":"ContainerDied","Data":"11632092b88309f417ce1ed52952cbfca153bd302f1e48a032e1d47356a6bbc8"} Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.270127 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-28h9c" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.270227 4625 scope.go:117] "RemoveContainer" containerID="b07594e59866df473d3f51bb81b469d8b88dee50063625b94ec1f157f86c6896" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.290137 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-blvbp" event={"ID":"c29ce362-3978-4713-833d-49aab29a394c","Type":"ContainerStarted","Data":"0d14af64aecf7fe87770ecfce2961b50335df6b3ae7d09489d352aacebb089e6"} Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.296403 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8ff9c589c-cnghx" event={"ID":"d6b23945-e07f-4c02-9cb4-1515c58af99b","Type":"ContainerStarted","Data":"c71689a409db98c560670f8b4ddea4bfd51aa60962606ca0f248cac8582b692b"} Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.331977 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2swhw" event={"ID":"d7887abf-7df6-4058-b3f0-e58295b168c1","Type":"ContainerStarted","Data":"aea39930c90741ba0ab23f18d8d1e76b877e35986e2177d81786f53fb96387af"} Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.336570 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" event={"ID":"fadcb0e2-13a4-405a-b831-2d6a7739bf32","Type":"ContainerDied","Data":"600cac08a316fe626601b20e5bf411091a490a0bc57351f9029148edb6b579ea"} Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.336687 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-s6l5d" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.353414 4625 scope.go:117] "RemoveContainer" containerID="f3ec6248de430218df10f13501aea5de184adaff575fb77031f9eee3ade90b48" Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.438857 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw" event={"ID":"ef2b49b4-2807-4a26-8876-3cc189692b73","Type":"ContainerStarted","Data":"5d037a6279c32de28c5f68b251590ea6ab330f446284b0c2746846cd3ded9cbe"} Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.469459 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-28h9c"] Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.501571 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-28h9c"] Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.573238 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-s6l5d"] Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.608179 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-s6l5d"] Dec 02 14:04:41 crc kubenswrapper[4625]: I1202 14:04:41.870328 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b6bf499d5-zkn4m"] Dec 02 14:04:41 crc kubenswrapper[4625]: W1202 14:04:41.878902 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac678a62_6d5a_4548_94ff_384289748b18.slice/crio-edc4e2dfa5a28627909057c3a23f5780bbc89469b50786cd2c9a30ca22e002bd WatchSource:0}: Error finding container edc4e2dfa5a28627909057c3a23f5780bbc89469b50786cd2c9a30ca22e002bd: Status 404 returned error can't find the container with id edc4e2dfa5a28627909057c3a23f5780bbc89469b50786cd2c9a30ca22e002bd Dec 02 14:04:42 crc kubenswrapper[4625]: I1202 14:04:42.473979 4625 generic.go:334] "Generic (PLEG): container finished" podID="ef2b49b4-2807-4a26-8876-3cc189692b73" containerID="2163b98c42fbd777462561f6bd84b474f4f870c1d4822486c4572c9ab85873e8" exitCode=0 Dec 02 14:04:42 crc kubenswrapper[4625]: I1202 14:04:42.474245 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw" event={"ID":"ef2b49b4-2807-4a26-8876-3cc189692b73","Type":"ContainerDied","Data":"2163b98c42fbd777462561f6bd84b474f4f870c1d4822486c4572c9ab85873e8"} Dec 02 14:04:42 crc kubenswrapper[4625]: I1202 14:04:42.507106 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b6bf499d5-zkn4m" event={"ID":"ac678a62-6d5a-4548-94ff-384289748b18","Type":"ContainerStarted","Data":"edc4e2dfa5a28627909057c3a23f5780bbc89469b50786cd2c9a30ca22e002bd"} Dec 02 14:04:42 crc kubenswrapper[4625]: I1202 14:04:42.873506 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f029f325-63da-4a0d-ba45-b79e41e91e72" path="/var/lib/kubelet/pods/f029f325-63da-4a0d-ba45-b79e41e91e72/volumes" Dec 02 14:04:42 crc kubenswrapper[4625]: I1202 14:04:42.874952 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fadcb0e2-13a4-405a-b831-2d6a7739bf32" path="/var/lib/kubelet/pods/fadcb0e2-13a4-405a-b831-2d6a7739bf32/volumes" Dec 02 14:04:43 crc kubenswrapper[4625]: I1202 14:04:43.548563 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw" 
event={"ID":"ef2b49b4-2807-4a26-8876-3cc189692b73","Type":"ContainerStarted","Data":"84273fd170c0695e5976de2c52c5c3eb60351d7db691337c9e63218b130c0ec8"} Dec 02 14:04:43 crc kubenswrapper[4625]: I1202 14:04:43.550302 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw" Dec 02 14:04:43 crc kubenswrapper[4625]: I1202 14:04:43.576187 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw" podStartSLOduration=7.5761558 podStartE2EDuration="7.5761558s" podCreationTimestamp="2025-12-02 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:04:43.57169063 +0000 UTC m=+1239.533867715" watchObservedRunningTime="2025-12-02 14:04:43.5761558 +0000 UTC m=+1239.538332875" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.601923 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8ff9c589c-cnghx"] Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.677665 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c94877878-jvhxv"] Dec 02 14:04:45 crc kubenswrapper[4625]: E1202 14:04:45.678899 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f029f325-63da-4a0d-ba45-b79e41e91e72" containerName="init" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.678926 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f029f325-63da-4a0d-ba45-b79e41e91e72" containerName="init" Dec 02 14:04:45 crc kubenswrapper[4625]: E1202 14:04:45.679001 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fadcb0e2-13a4-405a-b831-2d6a7739bf32" containerName="init" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.679012 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="fadcb0e2-13a4-405a-b831-2d6a7739bf32" containerName="init" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.708901 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="fadcb0e2-13a4-405a-b831-2d6a7739bf32" containerName="init" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.708989 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="f029f325-63da-4a0d-ba45-b79e41e91e72" containerName="init" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.721284 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.726063 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.799403 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-combined-ca-bundle\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.799455 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-horizon-tls-certs\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.799483 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/04b6d9a8-9eed-441e-a627-83774df65ed9-config-data\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.799550 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04b6d9a8-9eed-441e-a627-83774df65ed9-logs\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.799591 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btcvz\" (UniqueName: \"kubernetes.io/projected/04b6d9a8-9eed-441e-a627-83774df65ed9-kube-api-access-btcvz\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.799634 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-horizon-secret-key\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.799706 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04b6d9a8-9eed-441e-a627-83774df65ed9-scripts\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.826631 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c94877878-jvhxv"] Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.853164 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b6bf499d5-zkn4m"] Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.901908 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/04b6d9a8-9eed-441e-a627-83774df65ed9-scripts\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.902420 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-combined-ca-bundle\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.902546 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-horizon-tls-certs\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.902640 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/04b6d9a8-9eed-441e-a627-83774df65ed9-config-data\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.903175 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04b6d9a8-9eed-441e-a627-83774df65ed9-logs\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.903434 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btcvz\" (UniqueName: \"kubernetes.io/projected/04b6d9a8-9eed-441e-a627-83774df65ed9-kube-api-access-btcvz\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.903594 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-horizon-secret-key\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.905671 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04b6d9a8-9eed-441e-a627-83774df65ed9-logs\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.905934 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/04b6d9a8-9eed-441e-a627-83774df65ed9-config-data\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.906909 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04b6d9a8-9eed-441e-a627-83774df65ed9-scripts\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " 
pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.915948 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-horizon-tls-certs\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.918891 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7dc4db5bfb-zbs4l"] Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.920021 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-horizon-secret-key\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.920734 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.939421 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-combined-ca-bundle\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.970516 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7dc4db5bfb-zbs4l"] Dec 02 14:04:45 crc kubenswrapper[4625]: I1202 14:04:45.981204 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btcvz\" (UniqueName: \"kubernetes.io/projected/04b6d9a8-9eed-441e-a627-83774df65ed9-kube-api-access-btcvz\") pod \"horizon-5c94877878-jvhxv\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.005681 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/92339196-3d33-4b76-9ba2-81e1a8373e84-horizon-tls-certs\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.005758 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92339196-3d33-4b76-9ba2-81e1a8373e84-logs\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.005861 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92339196-3d33-4b76-9ba2-81e1a8373e84-config-data\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.005887 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7dw7\" (UniqueName: \"kubernetes.io/projected/92339196-3d33-4b76-9ba2-81e1a8373e84-kube-api-access-h7dw7\") pod 
\"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.006030 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/92339196-3d33-4b76-9ba2-81e1a8373e84-horizon-secret-key\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.006069 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92339196-3d33-4b76-9ba2-81e1a8373e84-combined-ca-bundle\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.006121 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92339196-3d33-4b76-9ba2-81e1a8373e84-scripts\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.094969 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.108053 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/92339196-3d33-4b76-9ba2-81e1a8373e84-horizon-secret-key\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.108424 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92339196-3d33-4b76-9ba2-81e1a8373e84-combined-ca-bundle\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.108472 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92339196-3d33-4b76-9ba2-81e1a8373e84-scripts\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.108545 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/92339196-3d33-4b76-9ba2-81e1a8373e84-horizon-tls-certs\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.108571 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92339196-3d33-4b76-9ba2-81e1a8373e84-logs\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.108630 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/92339196-3d33-4b76-9ba2-81e1a8373e84-config-data\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.108648 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7dw7\" (UniqueName: \"kubernetes.io/projected/92339196-3d33-4b76-9ba2-81e1a8373e84-kube-api-access-h7dw7\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.110474 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92339196-3d33-4b76-9ba2-81e1a8373e84-logs\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.112068 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92339196-3d33-4b76-9ba2-81e1a8373e84-scripts\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.115611 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92339196-3d33-4b76-9ba2-81e1a8373e84-config-data\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.137362 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/92339196-3d33-4b76-9ba2-81e1a8373e84-horizon-tls-certs\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.137978 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/92339196-3d33-4b76-9ba2-81e1a8373e84-horizon-secret-key\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.139481 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92339196-3d33-4b76-9ba2-81e1a8373e84-combined-ca-bundle\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.145805 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7dw7\" (UniqueName: \"kubernetes.io/projected/92339196-3d33-4b76-9ba2-81e1a8373e84-kube-api-access-h7dw7\") pod \"horizon-7dc4db5bfb-zbs4l\" (UID: \"92339196-3d33-4b76-9ba2-81e1a8373e84\") " pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.356196 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:04:46 crc kubenswrapper[4625]: I1202 14:04:46.777856 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c94877878-jvhxv"] Dec 02 14:04:47 crc kubenswrapper[4625]: I1202 14:04:47.635525 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw" Dec 02 14:04:47 crc kubenswrapper[4625]: I1202 14:04:47.666431 4625 generic.go:334] "Generic (PLEG): container finished" podID="2dcf8f03-f5f2-46ea-964d-07412aee459d" containerID="dde77d1e0c32452a78e4539fea8157b9f734116b752418838b9767f2d7e247bd" exitCode=0 Dec 02 14:04:47 crc kubenswrapper[4625]: I1202 14:04:47.666491 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p8prn" event={"ID":"2dcf8f03-f5f2-46ea-964d-07412aee459d","Type":"ContainerDied","Data":"dde77d1e0c32452a78e4539fea8157b9f734116b752418838b9767f2d7e247bd"} Dec 02 14:04:47 crc kubenswrapper[4625]: I1202 14:04:47.749794 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-72k88"] Dec 02 14:04:47 crc kubenswrapper[4625]: I1202 14:04:47.754025 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-72k88" podUID="c5dab0a9-6481-4ef3-9462-c9a26c451ba9" containerName="dnsmasq-dns" containerID="cri-o://42475adc2afe6413daadfca967f6430024b3163474318d831113c1c253391c87" gracePeriod=10 Dec 02 14:04:48 crc kubenswrapper[4625]: I1202 14:04:48.712927 4625 generic.go:334] "Generic (PLEG): container finished" podID="c5dab0a9-6481-4ef3-9462-c9a26c451ba9" containerID="42475adc2afe6413daadfca967f6430024b3163474318d831113c1c253391c87" exitCode=0 Dec 02 14:04:48 crc kubenswrapper[4625]: I1202 14:04:48.713510 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-72k88" event={"ID":"c5dab0a9-6481-4ef3-9462-c9a26c451ba9","Type":"ContainerDied","Data":"42475adc2afe6413daadfca967f6430024b3163474318d831113c1c253391c87"} Dec 02 14:04:49 crc kubenswrapper[4625]: I1202 14:04:49.275927 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:04:49 crc kubenswrapper[4625]: I1202 14:04:49.276406 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:04:49 crc kubenswrapper[4625]: I1202 14:04:49.276469 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 14:04:49 crc kubenswrapper[4625]: I1202 14:04:49.277503 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5271eaf0b8b85861d7c190af249c8999cbc2c292aa3724e0a85121cbb59f2516"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:04:49 crc kubenswrapper[4625]: I1202 14:04:49.277562 4625 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://5271eaf0b8b85861d7c190af249c8999cbc2c292aa3724e0a85121cbb59f2516" gracePeriod=600 Dec 02 14:04:49 crc kubenswrapper[4625]: I1202 14:04:49.732512 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="5271eaf0b8b85861d7c190af249c8999cbc2c292aa3724e0a85121cbb59f2516" exitCode=0 Dec 02 14:04:49 crc kubenswrapper[4625]: I1202 14:04:49.732602 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"5271eaf0b8b85861d7c190af249c8999cbc2c292aa3724e0a85121cbb59f2516"} Dec 02 14:04:49 crc kubenswrapper[4625]: I1202 14:04:49.732690 4625 scope.go:117] "RemoveContainer" containerID="26c37d19f3fe7a2800125178b96518c47f7905764a81c00a7c86f8da62aaaa2f" Dec 02 14:04:51 crc kubenswrapper[4625]: I1202 14:04:51.319961 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-72k88" podUID="c5dab0a9-6481-4ef3-9462-c9a26c451ba9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Dec 02 14:04:53 crc kubenswrapper[4625]: I1202 14:04:53.836832 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c94877878-jvhxv" event={"ID":"04b6d9a8-9eed-441e-a627-83774df65ed9","Type":"ContainerStarted","Data":"e6332762322b607966e6e5a32e31d491921997bba804b1194d69770519487cc9"} Dec 02 14:04:53 crc kubenswrapper[4625]: I1202 14:04:53.842362 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p8prn" event={"ID":"2dcf8f03-f5f2-46ea-964d-07412aee459d","Type":"ContainerDied","Data":"f4f329d036c962c4d9cce85c60078f79efb21501a47ec269915846ee99bb6cc0"} Dec 02 14:04:53 crc kubenswrapper[4625]: I1202 14:04:53.842406 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4f329d036c962c4d9cce85c60078f79efb21501a47ec269915846ee99bb6cc0" Dec 02 14:04:53 crc kubenswrapper[4625]: I1202 14:04:53.872387 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:53 crc kubenswrapper[4625]: I1202 14:04:53.989339 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-fernet-keys\") pod \"2dcf8f03-f5f2-46ea-964d-07412aee459d\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " Dec 02 14:04:53 crc kubenswrapper[4625]: I1202 14:04:53.989430 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqjwv\" (UniqueName: \"kubernetes.io/projected/2dcf8f03-f5f2-46ea-964d-07412aee459d-kube-api-access-tqjwv\") pod \"2dcf8f03-f5f2-46ea-964d-07412aee459d\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " Dec 02 14:04:53 crc kubenswrapper[4625]: I1202 14:04:53.989505 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-scripts\") pod \"2dcf8f03-f5f2-46ea-964d-07412aee459d\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " Dec 02 14:04:53 crc kubenswrapper[4625]: I1202 14:04:53.989622 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-credential-keys\") pod \"2dcf8f03-f5f2-46ea-964d-07412aee459d\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " Dec 02 14:04:53 crc kubenswrapper[4625]: I1202 14:04:53.989687 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-config-data\") pod \"2dcf8f03-f5f2-46ea-964d-07412aee459d\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " Dec 02 14:04:53 crc kubenswrapper[4625]: I1202 14:04:53.989721 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-combined-ca-bundle\") pod \"2dcf8f03-f5f2-46ea-964d-07412aee459d\" (UID: \"2dcf8f03-f5f2-46ea-964d-07412aee459d\") " Dec 02 14:04:53 crc kubenswrapper[4625]: I1202 14:04:53.999403 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-scripts" (OuterVolumeSpecName: "scripts") pod "2dcf8f03-f5f2-46ea-964d-07412aee459d" (UID: "2dcf8f03-f5f2-46ea-964d-07412aee459d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:54 crc kubenswrapper[4625]: I1202 14:04:54.000220 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2dcf8f03-f5f2-46ea-964d-07412aee459d" (UID: "2dcf8f03-f5f2-46ea-964d-07412aee459d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:54 crc kubenswrapper[4625]: I1202 14:04:54.017745 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2dcf8f03-f5f2-46ea-964d-07412aee459d" (UID: "2dcf8f03-f5f2-46ea-964d-07412aee459d"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:54 crc kubenswrapper[4625]: I1202 14:04:54.028948 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dcf8f03-f5f2-46ea-964d-07412aee459d-kube-api-access-tqjwv" (OuterVolumeSpecName: "kube-api-access-tqjwv") pod "2dcf8f03-f5f2-46ea-964d-07412aee459d" (UID: "2dcf8f03-f5f2-46ea-964d-07412aee459d"). InnerVolumeSpecName "kube-api-access-tqjwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:04:54 crc kubenswrapper[4625]: I1202 14:04:54.051822 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dcf8f03-f5f2-46ea-964d-07412aee459d" (UID: "2dcf8f03-f5f2-46ea-964d-07412aee459d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:54 crc kubenswrapper[4625]: I1202 14:04:54.053060 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-config-data" (OuterVolumeSpecName: "config-data") pod "2dcf8f03-f5f2-46ea-964d-07412aee459d" (UID: "2dcf8f03-f5f2-46ea-964d-07412aee459d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:04:54 crc kubenswrapper[4625]: I1202 14:04:54.092884 4625 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:54 crc kubenswrapper[4625]: I1202 14:04:54.092927 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqjwv\" (UniqueName: \"kubernetes.io/projected/2dcf8f03-f5f2-46ea-964d-07412aee459d-kube-api-access-tqjwv\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:54 crc kubenswrapper[4625]: I1202 14:04:54.092938 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:54 crc kubenswrapper[4625]: I1202 14:04:54.092947 4625 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:54 crc kubenswrapper[4625]: I1202 14:04:54.092957 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:54 crc kubenswrapper[4625]: I1202 14:04:54.092965 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dcf8f03-f5f2-46ea-964d-07412aee459d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:04:54 crc kubenswrapper[4625]: I1202 14:04:54.856743 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-p8prn" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.037968 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-p8prn"] Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.047433 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-p8prn"] Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.136261 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-w2qlr"] Dec 02 14:04:55 crc kubenswrapper[4625]: E1202 14:04:55.136962 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dcf8f03-f5f2-46ea-964d-07412aee459d" containerName="keystone-bootstrap" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.136984 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dcf8f03-f5f2-46ea-964d-07412aee459d" containerName="keystone-bootstrap" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.137158 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dcf8f03-f5f2-46ea-964d-07412aee459d" containerName="keystone-bootstrap" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.145004 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.151109 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.151379 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-v2szz" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.156992 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.157199 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.157371 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.169524 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-w2qlr"] Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.246243 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-combined-ca-bundle\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.246338 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-scripts\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.246371 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8lfc\" (UniqueName: \"kubernetes.io/projected/57d79055-dea2-4cd4-a642-b63d2deaa339-kube-api-access-w8lfc\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 
14:04:55.246406 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-fernet-keys\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.246446 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-credential-keys\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.246472 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-config-data\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: E1202 14:04:55.296992 4625 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dcf8f03_f5f2_46ea_964d_07412aee459d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dcf8f03_f5f2_46ea_964d_07412aee459d.slice/crio-f4f329d036c962c4d9cce85c60078f79efb21501a47ec269915846ee99bb6cc0\": RecentStats: unable to find data in memory cache]" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.347918 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-combined-ca-bundle\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.347998 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-scripts\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.348036 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8lfc\" (UniqueName: \"kubernetes.io/projected/57d79055-dea2-4cd4-a642-b63d2deaa339-kube-api-access-w8lfc\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.348071 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-fernet-keys\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.348107 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-credential-keys\") pod \"keystone-bootstrap-w2qlr\" (UID: 
\"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.348128 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-config-data\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.359892 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-fernet-keys\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.364215 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-combined-ca-bundle\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.364608 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-scripts\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.386963 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-credential-keys\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.389099 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-config-data\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.410623 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8lfc\" (UniqueName: \"kubernetes.io/projected/57d79055-dea2-4cd4-a642-b63d2deaa339-kube-api-access-w8lfc\") pod \"keystone-bootstrap-w2qlr\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") " pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:55 crc kubenswrapper[4625]: I1202 14:04:55.503263 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-w2qlr" Dec 02 14:04:56 crc kubenswrapper[4625]: I1202 14:04:56.869566 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dcf8f03-f5f2-46ea-964d-07412aee459d" path="/var/lib/kubelet/pods/2dcf8f03-f5f2-46ea-964d-07412aee459d/volumes" Dec 02 14:04:59 crc kubenswrapper[4625]: E1202 14:04:59.615637 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Dec 02 14:04:59 crc kubenswrapper[4625]: E1202 14:04:59.616417 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r27b6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-2swhw_openstack(d7887abf-7df6-4058-b3f0-e58295b168c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:04:59 crc kubenswrapper[4625]: E1202 14:04:59.617567 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-2swhw" podUID="d7887abf-7df6-4058-b3f0-e58295b168c1" Dec 02 14:04:59 crc kubenswrapper[4625]: E1202 14:04:59.905516 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-2swhw" podUID="d7887abf-7df6-4058-b3f0-e58295b168c1" Dec 02 14:05:01 crc kubenswrapper[4625]: I1202 14:05:01.320292 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-72k88" podUID="c5dab0a9-6481-4ef3-9462-c9a26c451ba9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Dec 02 14:05:02 crc kubenswrapper[4625]: I1202 14:05:02.944802 4625 generic.go:334] "Generic (PLEG): container finished" podID="314f653d-9ec6-47e4-af2a-aadc2440d332" containerID="d0493e0f0a3c8678839326776275f5db0684b9f0dfa81b8729b8bc8fc7e290d8" exitCode=0 Dec 02 14:05:02 crc kubenswrapper[4625]: I1202 14:05:02.944861 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6xqcm" event={"ID":"314f653d-9ec6-47e4-af2a-aadc2440d332","Type":"ContainerDied","Data":"d0493e0f0a3c8678839326776275f5db0684b9f0dfa81b8729b8bc8fc7e290d8"} Dec 02 14:05:06 crc kubenswrapper[4625]: I1202 14:05:06.322088 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-72k88" podUID="c5dab0a9-6481-4ef3-9462-c9a26c451ba9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Dec 02 14:05:06 crc kubenswrapper[4625]: I1202 14:05:06.324452 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-72k88" Dec 02 14:05:07 crc kubenswrapper[4625]: E1202 14:05:07.004233 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 02 14:05:07 crc kubenswrapper[4625]: E1202 14:05:07.004498 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b4h5d4h66bh8ch69h599h666h558h7hcch5b5h67dhcdh79h7hf5h54ch598h5b7hc6hfch587h7fh687h58dh5b6h678h67ch6fh8ch555h5f9q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sf9k4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6b6bf499d5-zkn4m_openstack(ac678a62-6d5a-4548-94ff-384289748b18): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:05:07 crc kubenswrapper[4625]: E1202 14:05:07.012926 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6b6bf499d5-zkn4m" podUID="ac678a62-6d5a-4548-94ff-384289748b18" Dec 02 14:05:07 crc kubenswrapper[4625]: E1202 14:05:07.013883 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 02 14:05:07 crc kubenswrapper[4625]: E1202 14:05:07.014031 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhfch55fhcdh5fch595h68chdch5bh65h59h5c5h66dh564h5d4h569h5dfh64ch5f9h557hd8h644h589h5c9h5d9h76hb6h54fh564h5b7h667h78q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btcvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5c94877878-jvhxv_openstack(04b6d9a8-9eed-441e-a627-83774df65ed9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:05:07 crc kubenswrapper[4625]: E1202 14:05:07.127035 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5c94877878-jvhxv" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" Dec 02 14:05:07 crc kubenswrapper[4625]: E1202 14:05:07.154284 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 02 14:05:07 crc kubenswrapper[4625]: E1202 14:05:07.155029 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n57ch546hffh5bch57fhf6h575h68bh694h55fh66dh68ch58dh5d4h5fdh5fch588h5cch9dh68dh6bh676h95hd6h595h64chd5h65h57bh565h5c8h8bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gmh6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6cdb94fc6c-t5c7p_openstack(52ce2aac-6c7b-4458-b4b8-b1467eb43de0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:05:07 crc kubenswrapper[4625]: E1202 14:05:07.159579 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6cdb94fc6c-t5c7p" podUID="52ce2aac-6c7b-4458-b4b8-b1467eb43de0" Dec 02 14:05:07 crc kubenswrapper[4625]: I1202 14:05:07.159816 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-72k88" event={"ID":"c5dab0a9-6481-4ef3-9462-c9a26c451ba9","Type":"ContainerDied","Data":"b5dfc98ffdbfe98067a446a7001b82087d87939fdf5a41d22242ebef44b2b068"} Dec 02 14:05:07 crc kubenswrapper[4625]: I1202 14:05:07.159850 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5dfc98ffdbfe98067a446a7001b82087d87939fdf5a41d22242ebef44b2b068" Dec 02 14:05:07 crc kubenswrapper[4625]: E1202 14:05:07.169677 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 02 14:05:07 crc kubenswrapper[4625]: E1202 14:05:07.169971 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n96h5c9h54ch87h559h664h666h96h568h5d5h5c7h669h664h59ch574h579h86hc6h546h646h679h98h66fh5b9h569h5b5h599h575h5fch95h67ch9bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q22zt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-8ff9c589c-cnghx_openstack(d6b23945-e07f-4c02-9cb4-1515c58af99b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:05:07 crc kubenswrapper[4625]: E1202 14:05:07.174887 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-8ff9c589c-cnghx" podUID="d6b23945-e07f-4c02-9cb4-1515c58af99b" Dec 02 14:05:07 crc kubenswrapper[4625]: I1202 14:05:07.280711 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-72k88" Dec 02 14:05:07 crc kubenswrapper[4625]: I1202 14:05:07.434886 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-ovsdbserver-nb\") pod \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " Dec 02 14:05:07 crc kubenswrapper[4625]: I1202 14:05:07.434974 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-dns-svc\") pod \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " Dec 02 14:05:07 crc kubenswrapper[4625]: I1202 14:05:07.435015 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwcjp\" (UniqueName: \"kubernetes.io/projected/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-kube-api-access-jwcjp\") pod \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " Dec 02 14:05:07 crc kubenswrapper[4625]: I1202 14:05:07.435137 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-ovsdbserver-sb\") pod \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " Dec 02 14:05:07 crc kubenswrapper[4625]: I1202 14:05:07.435201 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-config\") pod \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\" (UID: \"c5dab0a9-6481-4ef3-9462-c9a26c451ba9\") " Dec 02 14:05:07 crc kubenswrapper[4625]: I1202 14:05:07.444393 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-kube-api-access-jwcjp" (OuterVolumeSpecName: "kube-api-access-jwcjp") pod "c5dab0a9-6481-4ef3-9462-c9a26c451ba9" (UID: "c5dab0a9-6481-4ef3-9462-c9a26c451ba9"). InnerVolumeSpecName "kube-api-access-jwcjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:07 crc kubenswrapper[4625]: I1202 14:05:07.712267 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwcjp\" (UniqueName: \"kubernetes.io/projected/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-kube-api-access-jwcjp\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:07 crc kubenswrapper[4625]: I1202 14:05:07.796853 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5dab0a9-6481-4ef3-9462-c9a26c451ba9" (UID: "c5dab0a9-6481-4ef3-9462-c9a26c451ba9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:07 crc kubenswrapper[4625]: I1202 14:05:07.820658 4625 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:07 crc kubenswrapper[4625]: I1202 14:05:07.941095 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5dab0a9-6481-4ef3-9462-c9a26c451ba9" (UID: "c5dab0a9-6481-4ef3-9462-c9a26c451ba9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:07 crc kubenswrapper[4625]: I1202 14:05:07.950931 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c5dab0a9-6481-4ef3-9462-c9a26c451ba9" (UID: "c5dab0a9-6481-4ef3-9462-c9a26c451ba9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:08 crc kubenswrapper[4625]: I1202 14:05:08.020094 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-config" (OuterVolumeSpecName: "config") pod "c5dab0a9-6481-4ef3-9462-c9a26c451ba9" (UID: "c5dab0a9-6481-4ef3-9462-c9a26c451ba9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:08 crc kubenswrapper[4625]: I1202 14:05:08.036142 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:08 crc kubenswrapper[4625]: I1202 14:05:08.036193 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:08 crc kubenswrapper[4625]: I1202 14:05:08.036205 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5dab0a9-6481-4ef3-9462-c9a26c451ba9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:08 crc kubenswrapper[4625]: I1202 14:05:08.103752 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7dc4db5bfb-zbs4l"] Dec 02 14:05:08 crc kubenswrapper[4625]: I1202 14:05:08.190828 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-72k88" Dec 02 14:05:08 crc kubenswrapper[4625]: E1202 14:05:08.204621 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5c94877878-jvhxv" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" Dec 02 14:05:08 crc kubenswrapper[4625]: I1202 14:05:08.345414 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-72k88"] Dec 02 14:05:08 crc kubenswrapper[4625]: I1202 14:05:08.355965 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-72k88"] Dec 02 14:05:08 crc kubenswrapper[4625]: I1202 14:05:08.870432 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5dab0a9-6481-4ef3-9462-c9a26c451ba9" path="/var/lib/kubelet/pods/c5dab0a9-6481-4ef3-9462-c9a26c451ba9/volumes" Dec 02 14:05:10 crc kubenswrapper[4625]: I1202 14:05:10.215229 4625 generic.go:334] "Generic (PLEG): container finished" podID="506784cb-9737-438b-bd53-4719527b47bf" containerID="631d79035e0e3fd8d298c70b03579590f9ba71682e1ded470cdf3dc32d86f038" exitCode=0 Dec 02 14:05:10 crc kubenswrapper[4625]: I1202 14:05:10.215292 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tpfs8" event={"ID":"506784cb-9737-438b-bd53-4719527b47bf","Type":"ContainerDied","Data":"631d79035e0e3fd8d298c70b03579590f9ba71682e1ded470cdf3dc32d86f038"} Dec 02 14:05:11 crc kubenswrapper[4625]: I1202 14:05:11.327024 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-72k88" podUID="c5dab0a9-6481-4ef3-9462-c9a26c451ba9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Dec 02 14:05:22 crc kubenswrapper[4625]: E1202 14:05:22.981680 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 02 14:05:22 crc kubenswrapper[4625]: E1202 14:05:22.982743 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65ch69h597h9fhd5h86h685h5h596hcch579h65dh54fh654h66fhbhd4h594h55fh9dh644h5b9h548h598h5dh5f8h67bh66ch5d4h5b9hb6h59q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgpgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(abbd3215-4ced-473b-84a7-1f859e2782b2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.102959 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6xqcm" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.114862 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.156323 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tpfs8" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.167112 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.202082 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-combined-ca-bundle\") pod \"314f653d-9ec6-47e4-af2a-aadc2440d332\" (UID: \"314f653d-9ec6-47e4-af2a-aadc2440d332\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.203285 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfx54\" (UniqueName: \"kubernetes.io/projected/506784cb-9737-438b-bd53-4719527b47bf-kube-api-access-nfx54\") pod \"506784cb-9737-438b-bd53-4719527b47bf\" (UID: \"506784cb-9737-438b-bd53-4719527b47bf\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.203348 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-config-data\") pod \"314f653d-9ec6-47e4-af2a-aadc2440d332\" (UID: \"314f653d-9ec6-47e4-af2a-aadc2440d332\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.203413 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/506784cb-9737-438b-bd53-4719527b47bf-config\") pod \"506784cb-9737-438b-bd53-4719527b47bf\" (UID: \"506784cb-9737-438b-bd53-4719527b47bf\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.203433 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-config-data\") pod \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.203459 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-scripts\") pod \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.203500 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-horizon-secret-key\") pod \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.203532 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac678a62-6d5a-4548-94ff-384289748b18-config-data\") pod \"ac678a62-6d5a-4548-94ff-384289748b18\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.203587 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmh6b\" (UniqueName: \"kubernetes.io/projected/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-kube-api-access-gmh6b\") pod \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\" (UID: \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.203619 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-logs\") pod \"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\" (UID: 
\"52ce2aac-6c7b-4458-b4b8-b1467eb43de0\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.203655 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac678a62-6d5a-4548-94ff-384289748b18-scripts\") pod \"ac678a62-6d5a-4548-94ff-384289748b18\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.203678 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac678a62-6d5a-4548-94ff-384289748b18-logs\") pod \"ac678a62-6d5a-4548-94ff-384289748b18\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.203698 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac678a62-6d5a-4548-94ff-384289748b18-horizon-secret-key\") pod \"ac678a62-6d5a-4548-94ff-384289748b18\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.203719 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-db-sync-config-data\") pod \"314f653d-9ec6-47e4-af2a-aadc2440d332\" (UID: \"314f653d-9ec6-47e4-af2a-aadc2440d332\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.203760 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf9k4\" (UniqueName: \"kubernetes.io/projected/ac678a62-6d5a-4548-94ff-384289748b18-kube-api-access-sf9k4\") pod \"ac678a62-6d5a-4548-94ff-384289748b18\" (UID: \"ac678a62-6d5a-4548-94ff-384289748b18\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.203799 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cps2\" (UniqueName: \"kubernetes.io/projected/314f653d-9ec6-47e4-af2a-aadc2440d332-kube-api-access-5cps2\") pod \"314f653d-9ec6-47e4-af2a-aadc2440d332\" (UID: \"314f653d-9ec6-47e4-af2a-aadc2440d332\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.203924 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506784cb-9737-438b-bd53-4719527b47bf-combined-ca-bundle\") pod \"506784cb-9737-438b-bd53-4719527b47bf\" (UID: \"506784cb-9737-438b-bd53-4719527b47bf\") " Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.208377 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-config-data" (OuterVolumeSpecName: "config-data") pod "52ce2aac-6c7b-4458-b4b8-b1467eb43de0" (UID: "52ce2aac-6c7b-4458-b4b8-b1467eb43de0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.216207 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "52ce2aac-6c7b-4458-b4b8-b1467eb43de0" (UID: "52ce2aac-6c7b-4458-b4b8-b1467eb43de0"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.216734 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-scripts" (OuterVolumeSpecName: "scripts") pod "52ce2aac-6c7b-4458-b4b8-b1467eb43de0" (UID: "52ce2aac-6c7b-4458-b4b8-b1467eb43de0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.217931 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac678a62-6d5a-4548-94ff-384289748b18-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ac678a62-6d5a-4548-94ff-384289748b18" (UID: "ac678a62-6d5a-4548-94ff-384289748b18"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.218857 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-logs" (OuterVolumeSpecName: "logs") pod "52ce2aac-6c7b-4458-b4b8-b1467eb43de0" (UID: "52ce2aac-6c7b-4458-b4b8-b1467eb43de0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.222434 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac678a62-6d5a-4548-94ff-384289748b18-logs" (OuterVolumeSpecName: "logs") pod "ac678a62-6d5a-4548-94ff-384289748b18" (UID: "ac678a62-6d5a-4548-94ff-384289748b18"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.222559 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac678a62-6d5a-4548-94ff-384289748b18-config-data" (OuterVolumeSpecName: "config-data") pod "ac678a62-6d5a-4548-94ff-384289748b18" (UID: "ac678a62-6d5a-4548-94ff-384289748b18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.222645 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac678a62-6d5a-4548-94ff-384289748b18-scripts" (OuterVolumeSpecName: "scripts") pod "ac678a62-6d5a-4548-94ff-384289748b18" (UID: "ac678a62-6d5a-4548-94ff-384289748b18"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.223397 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-kube-api-access-gmh6b" (OuterVolumeSpecName: "kube-api-access-gmh6b") pod "52ce2aac-6c7b-4458-b4b8-b1467eb43de0" (UID: "52ce2aac-6c7b-4458-b4b8-b1467eb43de0"). InnerVolumeSpecName "kube-api-access-gmh6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.224057 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac678a62-6d5a-4548-94ff-384289748b18-kube-api-access-sf9k4" (OuterVolumeSpecName: "kube-api-access-sf9k4") pod "ac678a62-6d5a-4548-94ff-384289748b18" (UID: "ac678a62-6d5a-4548-94ff-384289748b18"). InnerVolumeSpecName "kube-api-access-sf9k4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.232019 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/314f653d-9ec6-47e4-af2a-aadc2440d332-kube-api-access-5cps2" (OuterVolumeSpecName: "kube-api-access-5cps2") pod "314f653d-9ec6-47e4-af2a-aadc2440d332" (UID: "314f653d-9ec6-47e4-af2a-aadc2440d332"). InnerVolumeSpecName "kube-api-access-5cps2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.232604 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "314f653d-9ec6-47e4-af2a-aadc2440d332" (UID: "314f653d-9ec6-47e4-af2a-aadc2440d332"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.248616 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/506784cb-9737-438b-bd53-4719527b47bf-kube-api-access-nfx54" (OuterVolumeSpecName: "kube-api-access-nfx54") pod "506784cb-9737-438b-bd53-4719527b47bf" (UID: "506784cb-9737-438b-bd53-4719527b47bf"). InnerVolumeSpecName "kube-api-access-nfx54". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.250724 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506784cb-9737-438b-bd53-4719527b47bf-config" (OuterVolumeSpecName: "config") pod "506784cb-9737-438b-bd53-4719527b47bf" (UID: "506784cb-9737-438b-bd53-4719527b47bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.269320 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506784cb-9737-438b-bd53-4719527b47bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "506784cb-9737-438b-bd53-4719527b47bf" (UID: "506784cb-9737-438b-bd53-4719527b47bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.275130 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "314f653d-9ec6-47e4-af2a-aadc2440d332" (UID: "314f653d-9ec6-47e4-af2a-aadc2440d332"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.298012 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-config-data" (OuterVolumeSpecName: "config-data") pod "314f653d-9ec6-47e4-af2a-aadc2440d332" (UID: "314f653d-9ec6-47e4-af2a-aadc2440d332"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307163 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307206 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfx54\" (UniqueName: \"kubernetes.io/projected/506784cb-9737-438b-bd53-4719527b47bf-kube-api-access-nfx54\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307220 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307294 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/506784cb-9737-438b-bd53-4719527b47bf-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307321 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307330 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307341 4625 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307350 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac678a62-6d5a-4548-94ff-384289748b18-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307359 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmh6b\" (UniqueName: \"kubernetes.io/projected/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-kube-api-access-gmh6b\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307367 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ce2aac-6c7b-4458-b4b8-b1467eb43de0-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307378 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac678a62-6d5a-4548-94ff-384289748b18-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307386 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac678a62-6d5a-4548-94ff-384289748b18-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307395 4625 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac678a62-6d5a-4548-94ff-384289748b18-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307405 4625 
reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/314f653d-9ec6-47e4-af2a-aadc2440d332-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307415 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf9k4\" (UniqueName: \"kubernetes.io/projected/ac678a62-6d5a-4548-94ff-384289748b18-kube-api-access-sf9k4\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307423 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cps2\" (UniqueName: \"kubernetes.io/projected/314f653d-9ec6-47e4-af2a-aadc2440d332-kube-api-access-5cps2\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.307433 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506784cb-9737-438b-bd53-4719527b47bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.931405 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tpfs8" event={"ID":"506784cb-9737-438b-bd53-4719527b47bf","Type":"ContainerDied","Data":"53d96f40596e541c9b2ca89a76912bcc3c765370da063cc520029b960a718c48"} Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.932083 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53d96f40596e541c9b2ca89a76912bcc3c765370da063cc520029b960a718c48" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.931943 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tpfs8" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.935101 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dc4db5bfb-zbs4l" event={"ID":"92339196-3d33-4b76-9ba2-81e1a8373e84","Type":"ContainerStarted","Data":"0bc162673154854f6e55cd4af1d082c3c6eeff96b122929be4bb6b5ae48453df"} Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.938628 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b6bf499d5-zkn4m" event={"ID":"ac678a62-6d5a-4548-94ff-384289748b18","Type":"ContainerDied","Data":"edc4e2dfa5a28627909057c3a23f5780bbc89469b50786cd2c9a30ca22e002bd"} Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.938720 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b6bf499d5-zkn4m" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.946459 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6xqcm" event={"ID":"314f653d-9ec6-47e4-af2a-aadc2440d332","Type":"ContainerDied","Data":"5e1b5e73f961c74fd875ce0537b62064b3ce336406fb6901fe3325059e9480c5"} Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.946525 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e1b5e73f961c74fd875ce0537b62064b3ce336406fb6901fe3325059e9480c5" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.946536 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6xqcm" Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.956177 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdb94fc6c-t5c7p" event={"ID":"52ce2aac-6c7b-4458-b4b8-b1467eb43de0","Type":"ContainerDied","Data":"dc8afc995f2b611dffc314779825532645d76ae8c8be4cb3a12517ce119dd37f"} Dec 02 14:05:23 crc kubenswrapper[4625]: I1202 14:05:23.956250 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cdb94fc6c-t5c7p" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.088760 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b6bf499d5-zkn4m"] Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.102689 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6b6bf499d5-zkn4m"] Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.124745 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6cdb94fc6c-t5c7p"] Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.148956 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6cdb94fc6c-t5c7p"] Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.448548 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-hxbrp"] Dec 02 14:05:24 crc kubenswrapper[4625]: E1202 14:05:24.449064 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5dab0a9-6481-4ef3-9462-c9a26c451ba9" containerName="dnsmasq-dns" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.449101 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5dab0a9-6481-4ef3-9462-c9a26c451ba9" containerName="dnsmasq-dns" Dec 02 14:05:24 crc kubenswrapper[4625]: E1202 14:05:24.449131 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5dab0a9-6481-4ef3-9462-c9a26c451ba9" containerName="init" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.449139 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5dab0a9-6481-4ef3-9462-c9a26c451ba9" containerName="init" Dec 02 14:05:24 crc kubenswrapper[4625]: E1202 14:05:24.449149 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="314f653d-9ec6-47e4-af2a-aadc2440d332" containerName="glance-db-sync" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.449156 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="314f653d-9ec6-47e4-af2a-aadc2440d332" containerName="glance-db-sync" Dec 02 14:05:24 crc kubenswrapper[4625]: E1202 14:05:24.449170 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506784cb-9737-438b-bd53-4719527b47bf" containerName="neutron-db-sync" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.449175 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="506784cb-9737-438b-bd53-4719527b47bf" containerName="neutron-db-sync" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.449430 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="314f653d-9ec6-47e4-af2a-aadc2440d332" containerName="glance-db-sync" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.449454 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5dab0a9-6481-4ef3-9462-c9a26c451ba9" containerName="dnsmasq-dns" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.449481 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="506784cb-9737-438b-bd53-4719527b47bf" containerName="neutron-db-sync" Dec 02 14:05:24 crc 
kubenswrapper[4625]: I1202 14:05:24.450719 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.480128 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-hxbrp"] Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.564672 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-config\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.564743 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbmzk\" (UniqueName: \"kubernetes.io/projected/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-kube-api-access-zbmzk\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.564774 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.564810 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.564837 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.564876 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.666296 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-config\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.666408 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbmzk\" (UniqueName: \"kubernetes.io/projected/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-kube-api-access-zbmzk\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 
14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.666449 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.666487 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.666750 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.666804 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.667920 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.667920 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-config\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.668091 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.668752 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.671450 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.709020 4625 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-zbmzk\" (UniqueName: \"kubernetes.io/projected/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-kube-api-access-zbmzk\") pod \"dnsmasq-dns-7d88d7b95f-hxbrp\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.800405 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.810337 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-55fdff466d-bbrr5"] Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.812092 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.816027 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.816276 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.816417 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.838191 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-tmhd7" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.907863 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-httpd-config\") pod \"neutron-55fdff466d-bbrr5\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.908414 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-ovndb-tls-certs\") pod \"neutron-55fdff466d-bbrr5\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.908572 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlsr8\" (UniqueName: \"kubernetes.io/projected/e595879c-342f-410e-9ad5-b60498125c2e-kube-api-access-jlsr8\") pod \"neutron-55fdff466d-bbrr5\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.908952 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-config\") pod \"neutron-55fdff466d-bbrr5\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:24 crc kubenswrapper[4625]: I1202 14:05:24.909060 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-combined-ca-bundle\") pod \"neutron-55fdff466d-bbrr5\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.117905 4625 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-httpd-config\") pod \"neutron-55fdff466d-bbrr5\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.118050 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-ovndb-tls-certs\") pod \"neutron-55fdff466d-bbrr5\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.118109 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlsr8\" (UniqueName: \"kubernetes.io/projected/e595879c-342f-410e-9ad5-b60498125c2e-kube-api-access-jlsr8\") pod \"neutron-55fdff466d-bbrr5\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.118134 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-config\") pod \"neutron-55fdff466d-bbrr5\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.118167 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-combined-ca-bundle\") pod \"neutron-55fdff466d-bbrr5\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.162586 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52ce2aac-6c7b-4458-b4b8-b1467eb43de0" path="/var/lib/kubelet/pods/52ce2aac-6c7b-4458-b4b8-b1467eb43de0/volumes" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.172908 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-combined-ca-bundle\") pod \"neutron-55fdff466d-bbrr5\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.173215 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-ovndb-tls-certs\") pod \"neutron-55fdff466d-bbrr5\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.186289 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-httpd-config\") pod \"neutron-55fdff466d-bbrr5\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.196618 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-config\") pod \"neutron-55fdff466d-bbrr5\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:25 crc 
kubenswrapper[4625]: I1202 14:05:25.209613 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlsr8\" (UniqueName: \"kubernetes.io/projected/e595879c-342f-410e-9ad5-b60498125c2e-kube-api-access-jlsr8\") pod \"neutron-55fdff466d-bbrr5\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.210382 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac678a62-6d5a-4548-94ff-384289748b18" path="/var/lib/kubelet/pods/ac678a62-6d5a-4548-94ff-384289748b18/volumes" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.244061 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55fdff466d-bbrr5"] Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.244125 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-hxbrp"] Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.249252 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.265671 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vz4hz"] Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.268593 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.302231 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vz4hz"] Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.427049 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.427139 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-dns-svc\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.427186 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.427217 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-config\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.427246 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" 
(UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.427270 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt72m\" (UniqueName: \"kubernetes.io/projected/2becaaf4-3b57-4612-b20e-ef1b93b563d1-kube-api-access-qt72m\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.529228 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-dns-svc\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.529357 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.529415 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-config\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.529465 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.529495 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt72m\" (UniqueName: \"kubernetes.io/projected/2becaaf4-3b57-4612-b20e-ef1b93b563d1-kube-api-access-qt72m\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.529631 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.531284 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.531355 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-dns-svc\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " 
pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.534458 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.534458 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.534534 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-config\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.571249 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt72m\" (UniqueName: \"kubernetes.io/projected/2becaaf4-3b57-4612-b20e-ef1b93b563d1-kube-api-access-qt72m\") pod \"dnsmasq-dns-55f844cf75-vz4hz\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.613109 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.880165 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.882851 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.885703 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d2cmn" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.886491 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.886503 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 02 14:05:25 crc kubenswrapper[4625]: I1202 14:05:25.908755 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.039644 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.039756 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb409327-65b9-4713-ab91-8a9ed9c84821-logs\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.039791 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.039823 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb409327-65b9-4713-ab91-8a9ed9c84821-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.039861 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-scripts\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.039903 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-config-data\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.039930 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvwtr\" (UniqueName: \"kubernetes.io/projected/bb409327-65b9-4713-ab91-8a9ed9c84821-kube-api-access-fvwtr\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " 
pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.141726 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.141835 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb409327-65b9-4713-ab91-8a9ed9c84821-logs\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.141869 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.141901 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb409327-65b9-4713-ab91-8a9ed9c84821-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.141937 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-scripts\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.141979 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-config-data\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.142023 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvwtr\" (UniqueName: \"kubernetes.io/projected/bb409327-65b9-4713-ab91-8a9ed9c84821-kube-api-access-fvwtr\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.142988 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.149706 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb409327-65b9-4713-ab91-8a9ed9c84821-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 
14:05:26.151125 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb409327-65b9-4713-ab91-8a9ed9c84821-logs\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.157618 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.162467 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-scripts\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.189758 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvwtr\" (UniqueName: \"kubernetes.io/projected/bb409327-65b9-4713-ab91-8a9ed9c84821-kube-api-access-fvwtr\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.194654 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.201004 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-config-data\") pod \"glance-default-external-api-0\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.222931 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: E1202 14:05:26.389079 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 02 14:05:26 crc kubenswrapper[4625]: E1202 14:05:26.389389 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g65z6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-blvbp_openstack(c29ce362-3978-4713-833d-49aab29a394c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:05:26 crc kubenswrapper[4625]: E1202 14:05:26.391944 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-blvbp" podUID="c29ce362-3978-4713-833d-49aab29a394c" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.506686 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 
02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.511135 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.523936 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.531366 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.562677 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8ff9c589c-cnghx" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.656341 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6b23945-e07f-4c02-9cb4-1515c58af99b-scripts\") pod \"d6b23945-e07f-4c02-9cb4-1515c58af99b\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.656580 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q22zt\" (UniqueName: \"kubernetes.io/projected/d6b23945-e07f-4c02-9cb4-1515c58af99b-kube-api-access-q22zt\") pod \"d6b23945-e07f-4c02-9cb4-1515c58af99b\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.656732 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6b23945-e07f-4c02-9cb4-1515c58af99b-logs\") pod \"d6b23945-e07f-4c02-9cb4-1515c58af99b\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.656768 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6b23945-e07f-4c02-9cb4-1515c58af99b-horizon-secret-key\") pod \"d6b23945-e07f-4c02-9cb4-1515c58af99b\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.656818 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6b23945-e07f-4c02-9cb4-1515c58af99b-config-data\") pod \"d6b23945-e07f-4c02-9cb4-1515c58af99b\" (UID: \"d6b23945-e07f-4c02-9cb4-1515c58af99b\") " Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.657145 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.657172 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.657196 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6b23945-e07f-4c02-9cb4-1515c58af99b-logs" (OuterVolumeSpecName: "logs") pod "d6b23945-e07f-4c02-9cb4-1515c58af99b" (UID: 
"d6b23945-e07f-4c02-9cb4-1515c58af99b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.657331 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.657357 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e953237-032c-443c-bb18-86369f783b77-logs\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.657398 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd8n7\" (UniqueName: \"kubernetes.io/projected/3e953237-032c-443c-bb18-86369f783b77-kube-api-access-vd8n7\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.657477 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.657512 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e953237-032c-443c-bb18-86369f783b77-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.657942 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6b23945-e07f-4c02-9cb4-1515c58af99b-config-data" (OuterVolumeSpecName: "config-data") pod "d6b23945-e07f-4c02-9cb4-1515c58af99b" (UID: "d6b23945-e07f-4c02-9cb4-1515c58af99b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.657975 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6b23945-e07f-4c02-9cb4-1515c58af99b-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.658045 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6b23945-e07f-4c02-9cb4-1515c58af99b-scripts" (OuterVolumeSpecName: "scripts") pod "d6b23945-e07f-4c02-9cb4-1515c58af99b" (UID: "d6b23945-e07f-4c02-9cb4-1515c58af99b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.661675 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b23945-e07f-4c02-9cb4-1515c58af99b-kube-api-access-q22zt" (OuterVolumeSpecName: "kube-api-access-q22zt") pod "d6b23945-e07f-4c02-9cb4-1515c58af99b" (UID: "d6b23945-e07f-4c02-9cb4-1515c58af99b"). InnerVolumeSpecName "kube-api-access-q22zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.674793 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b23945-e07f-4c02-9cb4-1515c58af99b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d6b23945-e07f-4c02-9cb4-1515c58af99b" (UID: "d6b23945-e07f-4c02-9cb4-1515c58af99b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.762847 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.763466 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e953237-032c-443c-bb18-86369f783b77-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.763621 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.763686 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.763714 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.763778 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e953237-032c-443c-bb18-86369f783b77-logs\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.763823 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd8n7\" (UniqueName: \"kubernetes.io/projected/3e953237-032c-443c-bb18-86369f783b77-kube-api-access-vd8n7\") pod \"glance-default-internal-api-0\" (UID: 
\"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.763946 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6b23945-e07f-4c02-9cb4-1515c58af99b-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.763950 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e953237-032c-443c-bb18-86369f783b77-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.763963 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q22zt\" (UniqueName: \"kubernetes.io/projected/d6b23945-e07f-4c02-9cb4-1515c58af99b-kube-api-access-q22zt\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.764026 4625 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6b23945-e07f-4c02-9cb4-1515c58af99b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.764042 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6b23945-e07f-4c02-9cb4-1515c58af99b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.764086 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.765658 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e953237-032c-443c-bb18-86369f783b77-logs\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.771851 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.778793 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.773468 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.809581 4625 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vd8n7\" (UniqueName: \"kubernetes.io/projected/3e953237-032c-443c-bb18-86369f783b77-kube-api-access-vd8n7\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.878512 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:26 crc kubenswrapper[4625]: I1202 14:05:26.930528 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:27 crc kubenswrapper[4625]: I1202 14:05:27.434614 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"22eacb360cbd64994ad7dde3fa2964df2620c7bf593d351571346615fdf674ec"} Dec 02 14:05:27 crc kubenswrapper[4625]: I1202 14:05:27.444976 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8ff9c589c-cnghx" Dec 02 14:05:27 crc kubenswrapper[4625]: I1202 14:05:27.445215 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8ff9c589c-cnghx" event={"ID":"d6b23945-e07f-4c02-9cb4-1515c58af99b","Type":"ContainerDied","Data":"c71689a409db98c560670f8b4ddea4bfd51aa60962606ca0f248cac8582b692b"} Dec 02 14:05:27 crc kubenswrapper[4625]: E1202 14:05:27.547696 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-blvbp" podUID="c29ce362-3978-4713-833d-49aab29a394c" Dec 02 14:05:27 crc kubenswrapper[4625]: I1202 14:05:27.662652 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8ff9c589c-cnghx"] Dec 02 14:05:27 crc kubenswrapper[4625]: I1202 14:05:27.712287 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8ff9c589c-cnghx"] Dec 02 14:05:27 crc kubenswrapper[4625]: I1202 14:05:27.809410 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-hxbrp"] Dec 02 14:05:27 crc kubenswrapper[4625]: I1202 14:05:27.871556 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-w2qlr"] Dec 02 14:05:28 crc kubenswrapper[4625]: W1202 14:05:28.085890 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57d79055_dea2_4cd4_a642_b63d2deaa339.slice/crio-363819fb749abf0a8bf8e37d2eb3c8aa794f64054522f7f72a2b4737c7c67bd4 WatchSource:0}: Error finding container 363819fb749abf0a8bf8e37d2eb3c8aa794f64054522f7f72a2b4737c7c67bd4: Status 404 returned error can't find the container with id 363819fb749abf0a8bf8e37d2eb3c8aa794f64054522f7f72a2b4737c7c67bd4 Dec 02 14:05:28 crc kubenswrapper[4625]: I1202 14:05:28.092083 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 02 14:05:28 crc kubenswrapper[4625]: I1202 14:05:28.210495 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vz4hz"] Dec 02 
14:05:28 crc kubenswrapper[4625]: I1202 14:05:28.239434 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55fdff466d-bbrr5"] Dec 02 14:05:28 crc kubenswrapper[4625]: I1202 14:05:28.469586 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" event={"ID":"2becaaf4-3b57-4612-b20e-ef1b93b563d1","Type":"ContainerStarted","Data":"5ec9126f27d407eb7a156e009e0c1a3c6c8d496830ffae73d6bb07cee79d3348"} Dec 02 14:05:28 crc kubenswrapper[4625]: I1202 14:05:28.485126 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fdkpr" event={"ID":"c2b840d2-7458-4769-9650-e62ff8676008","Type":"ContainerStarted","Data":"09a925c91fc8440516ea11a14fc8c3dfdcb74f05f26fc5cde8087f4c21ccbf41"} Dec 02 14:05:28 crc kubenswrapper[4625]: I1202 14:05:28.507817 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" event={"ID":"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0","Type":"ContainerStarted","Data":"0522822b50732bef38cb22dcb5f457ca46794abe09925053c5e6112ddeaaabf4"} Dec 02 14:05:28 crc kubenswrapper[4625]: I1202 14:05:28.524886 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-fdkpr" podStartSLOduration=5.443810899 podStartE2EDuration="53.524851681s" podCreationTimestamp="2025-12-02 14:04:35 +0000 UTC" firstStartedPulling="2025-12-02 14:04:38.347480124 +0000 UTC m=+1234.309657199" lastFinishedPulling="2025-12-02 14:05:26.428520906 +0000 UTC m=+1282.390697981" observedRunningTime="2025-12-02 14:05:28.517927304 +0000 UTC m=+1284.480104379" watchObservedRunningTime="2025-12-02 14:05:28.524851681 +0000 UTC m=+1284.487028756" Dec 02 14:05:28 crc kubenswrapper[4625]: I1202 14:05:28.529528 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w2qlr" event={"ID":"57d79055-dea2-4cd4-a642-b63d2deaa339","Type":"ContainerStarted","Data":"363819fb749abf0a8bf8e37d2eb3c8aa794f64054522f7f72a2b4737c7c67bd4"} Dec 02 14:05:28 crc kubenswrapper[4625]: I1202 14:05:28.542437 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55fdff466d-bbrr5" event={"ID":"e595879c-342f-410e-9ad5-b60498125c2e","Type":"ContainerStarted","Data":"b71d8a8a0b8fa005c21131aaa239982655642516e2031c5148ce5dd5bc191503"} Dec 02 14:05:28 crc kubenswrapper[4625]: I1202 14:05:28.547164 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:05:28 crc kubenswrapper[4625]: I1202 14:05:28.675615 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:05:28 crc kubenswrapper[4625]: I1202 14:05:28.877834 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b23945-e07f-4c02-9cb4-1515c58af99b" path="/var/lib/kubelet/pods/d6b23945-e07f-4c02-9cb4-1515c58af99b/volumes" Dec 02 14:05:29 crc kubenswrapper[4625]: I1202 14:05:29.701850 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb409327-65b9-4713-ab91-8a9ed9c84821","Type":"ContainerStarted","Data":"c9f0a593e2b74e8aecd04b4ace1efb849cad2e87751a703df5b2f6b4f68df887"} Dec 02 14:05:29 crc kubenswrapper[4625]: I1202 14:05:29.728672 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c94877878-jvhxv" event={"ID":"04b6d9a8-9eed-441e-a627-83774df65ed9","Type":"ContainerStarted","Data":"c7796a4fbf01c822d9a56af51c98b455d177862f046fb3e2704f97fb5f5a4805"} Dec 02 
14:05:29 crc kubenswrapper[4625]: I1202 14:05:29.758978 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55fdff466d-bbrr5" event={"ID":"e595879c-342f-410e-9ad5-b60498125c2e","Type":"ContainerStarted","Data":"d60ba7fb3fbbc9818305e854a7d5224071ef9ea372d4fe05ee68518c27c673db"} Dec 02 14:05:29 crc kubenswrapper[4625]: I1202 14:05:29.780737 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2swhw" event={"ID":"d7887abf-7df6-4058-b3f0-e58295b168c1","Type":"ContainerStarted","Data":"893172c1648c0029902621395892771df41fb07b84730fb9215235b0335e2c67"} Dec 02 14:05:29 crc kubenswrapper[4625]: I1202 14:05:29.805734 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5c94877878-jvhxv" podStartSLOduration=10.733401506 podStartE2EDuration="44.80570612s" podCreationTimestamp="2025-12-02 14:04:45 +0000 UTC" firstStartedPulling="2025-12-02 14:04:52.826704112 +0000 UTC m=+1248.788881197" lastFinishedPulling="2025-12-02 14:05:26.899008746 +0000 UTC m=+1282.861185811" observedRunningTime="2025-12-02 14:05:29.758729233 +0000 UTC m=+1285.720906308" watchObservedRunningTime="2025-12-02 14:05:29.80570612 +0000 UTC m=+1285.767883195" Dec 02 14:05:29 crc kubenswrapper[4625]: I1202 14:05:29.844618 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2swhw" podStartSLOduration=7.911449365 podStartE2EDuration="53.84459203s" podCreationTimestamp="2025-12-02 14:04:36 +0000 UTC" firstStartedPulling="2025-12-02 14:04:40.850344225 +0000 UTC m=+1236.812521300" lastFinishedPulling="2025-12-02 14:05:26.78348689 +0000 UTC m=+1282.745663965" observedRunningTime="2025-12-02 14:05:29.813409878 +0000 UTC m=+1285.775586973" watchObservedRunningTime="2025-12-02 14:05:29.84459203 +0000 UTC m=+1285.806769105" Dec 02 14:05:29 crc kubenswrapper[4625]: I1202 14:05:29.847615 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dc4db5bfb-zbs4l" event={"ID":"92339196-3d33-4b76-9ba2-81e1a8373e84","Type":"ContainerStarted","Data":"f09a9ec7ab1b52421671045b995b52e280b44ab8cdba40e3adf653a6ff3a530b"} Dec 02 14:05:29 crc kubenswrapper[4625]: I1202 14:05:29.870403 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3e953237-032c-443c-bb18-86369f783b77","Type":"ContainerStarted","Data":"fb01de7efe43e1a1133659ad14221db5605cbe8acfa64de64981bed7a582ba41"} Dec 02 14:05:30 crc kubenswrapper[4625]: I1202 14:05:30.938250 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c94877878-jvhxv" event={"ID":"04b6d9a8-9eed-441e-a627-83774df65ed9","Type":"ContainerStarted","Data":"f7d7aff050b1cd68f760459d9ee8066bf44a2756b77213a691265525e661240d"} Dec 02 14:05:30 crc kubenswrapper[4625]: I1202 14:05:30.946356 4625 generic.go:334] "Generic (PLEG): container finished" podID="2becaaf4-3b57-4612-b20e-ef1b93b563d1" containerID="d887f0d1de24b98c0613b9c979fa0844575a0637aa5ef8f10df3baf3070d4ea8" exitCode=0 Dec 02 14:05:30 crc kubenswrapper[4625]: I1202 14:05:30.946445 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" event={"ID":"2becaaf4-3b57-4612-b20e-ef1b93b563d1","Type":"ContainerDied","Data":"d887f0d1de24b98c0613b9c979fa0844575a0637aa5ef8f10df3baf3070d4ea8"} Dec 02 14:05:30 crc kubenswrapper[4625]: I1202 14:05:30.968342 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dc4db5bfb-zbs4l" 
event={"ID":"92339196-3d33-4b76-9ba2-81e1a8373e84","Type":"ContainerStarted","Data":"ecf1871be89bb7259b3396f1b0d15bf2940dc1ca653cceb4173acdb58bbada5d"} Dec 02 14:05:31 crc kubenswrapper[4625]: I1202 14:05:31.000294 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3e953237-032c-443c-bb18-86369f783b77","Type":"ContainerStarted","Data":"76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2"} Dec 02 14:05:31 crc kubenswrapper[4625]: I1202 14:05:31.045749 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb409327-65b9-4713-ab91-8a9ed9c84821","Type":"ContainerStarted","Data":"0a3e9bf30b2c952a2ef87d5b43eeff8d5bcb60007f31ebf37cad839dae64e5c2"} Dec 02 14:05:31 crc kubenswrapper[4625]: I1202 14:05:31.051302 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7dc4db5bfb-zbs4l" podStartSLOduration=42.116994216 podStartE2EDuration="46.051268848s" podCreationTimestamp="2025-12-02 14:04:45 +0000 UTC" firstStartedPulling="2025-12-02 14:05:23.030407196 +0000 UTC m=+1278.992584271" lastFinishedPulling="2025-12-02 14:05:26.964681828 +0000 UTC m=+1282.926858903" observedRunningTime="2025-12-02 14:05:31.026591302 +0000 UTC m=+1286.988768387" watchObservedRunningTime="2025-12-02 14:05:31.051268848 +0000 UTC m=+1287.013445923" Dec 02 14:05:31 crc kubenswrapper[4625]: I1202 14:05:31.064159 4625 generic.go:334] "Generic (PLEG): container finished" podID="94fd9386-b9a5-4617-ba8d-e985bbfcf8f0" containerID="3aa7b12d318135c37d9e76a59086941d7b8c0503b4c5169632e6a9200f01d4ab" exitCode=0 Dec 02 14:05:31 crc kubenswrapper[4625]: I1202 14:05:31.064352 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" event={"ID":"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0","Type":"ContainerDied","Data":"3aa7b12d318135c37d9e76a59086941d7b8c0503b4c5169632e6a9200f01d4ab"} Dec 02 14:05:31 crc kubenswrapper[4625]: I1202 14:05:31.099319 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w2qlr" event={"ID":"57d79055-dea2-4cd4-a642-b63d2deaa339","Type":"ContainerStarted","Data":"215f9812fe59267de0032da8a69efef7f944d128864bdbd4cf383ef1b0597e2e"} Dec 02 14:05:31 crc kubenswrapper[4625]: I1202 14:05:31.144436 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-w2qlr" podStartSLOduration=36.14440504 podStartE2EDuration="36.14440504s" podCreationTimestamp="2025-12-02 14:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:31.130294729 +0000 UTC m=+1287.092471804" watchObservedRunningTime="2025-12-02 14:05:31.14440504 +0000 UTC m=+1287.106582115" Dec 02 14:05:31 crc kubenswrapper[4625]: I1202 14:05:31.632340 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:05:31 crc kubenswrapper[4625]: I1202 14:05:31.837627 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:05:31 crc kubenswrapper[4625]: I1202 14:05:31.947357 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.002738 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8c746598f-ss7rg"] Dec 02 14:05:32 crc kubenswrapper[4625]: E1202 14:05:32.003402 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94fd9386-b9a5-4617-ba8d-e985bbfcf8f0" containerName="init" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.003424 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="94fd9386-b9a5-4617-ba8d-e985bbfcf8f0" containerName="init" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.003646 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="94fd9386-b9a5-4617-ba8d-e985bbfcf8f0" containerName="init" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.003701 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-dns-swift-storage-0\") pod \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.003790 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-ovsdbserver-nb\") pod \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.003874 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-config\") pod \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.003920 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbmzk\" (UniqueName: \"kubernetes.io/projected/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-kube-api-access-zbmzk\") pod \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.003976 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-ovsdbserver-sb\") pod \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.004137 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-dns-svc\") pod \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\" (UID: \"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0\") " Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.004831 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.012397 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.012646 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.042629 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-kube-api-access-zbmzk" (OuterVolumeSpecName: "kube-api-access-zbmzk") pod "94fd9386-b9a5-4617-ba8d-e985bbfcf8f0" (UID: "94fd9386-b9a5-4617-ba8d-e985bbfcf8f0"). InnerVolumeSpecName "kube-api-access-zbmzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.094388 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8c746598f-ss7rg"] Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.107643 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-config\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.107720 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qklp4\" (UniqueName: \"kubernetes.io/projected/f4477e45-6d29-4717-9168-8cf291295a40-kube-api-access-qklp4\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.107818 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-ovndb-tls-certs\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.108043 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-public-tls-certs\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.108130 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-combined-ca-bundle\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.108157 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-httpd-config\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.108256 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-internal-tls-certs\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.108373 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbmzk\" (UniqueName: \"kubernetes.io/projected/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-kube-api-access-zbmzk\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.154403 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "94fd9386-b9a5-4617-ba8d-e985bbfcf8f0" (UID: "94fd9386-b9a5-4617-ba8d-e985bbfcf8f0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.163139 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "94fd9386-b9a5-4617-ba8d-e985bbfcf8f0" (UID: "94fd9386-b9a5-4617-ba8d-e985bbfcf8f0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.164664 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "94fd9386-b9a5-4617-ba8d-e985bbfcf8f0" (UID: "94fd9386-b9a5-4617-ba8d-e985bbfcf8f0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.172505 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abbd3215-4ced-473b-84a7-1f859e2782b2","Type":"ContainerStarted","Data":"c0b85fc85b6af33a966bd899d504149093149ae35bad48a55cf819efdccadc6f"} Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.179466 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.179570 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-hxbrp" event={"ID":"94fd9386-b9a5-4617-ba8d-e985bbfcf8f0","Type":"ContainerDied","Data":"0522822b50732bef38cb22dcb5f457ca46794abe09925053c5e6112ddeaaabf4"} Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.179629 4625 scope.go:117] "RemoveContainer" containerID="3aa7b12d318135c37d9e76a59086941d7b8c0503b4c5169632e6a9200f01d4ab" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.183179 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "94fd9386-b9a5-4617-ba8d-e985bbfcf8f0" (UID: "94fd9386-b9a5-4617-ba8d-e985bbfcf8f0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.201399 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55fdff466d-bbrr5" event={"ID":"e595879c-342f-410e-9ad5-b60498125c2e","Type":"ContainerStarted","Data":"1f6e6931df756d52e2d5d628fa5ed4121b735dd954716b51a8e38e2024ecfbe1"} Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.201457 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.213794 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-config\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.213866 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qklp4\" (UniqueName: \"kubernetes.io/projected/f4477e45-6d29-4717-9168-8cf291295a40-kube-api-access-qklp4\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.213903 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-ovndb-tls-certs\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.213931 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-public-tls-certs\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.214065 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-combined-ca-bundle\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.214081 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-httpd-config\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.214137 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-internal-tls-certs\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.214857 4625 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.214873 4625 reconciler_common.go:293] "Volume 
detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.214885 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.214895 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.243999 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-internal-tls-certs\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.251598 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-public-tls-certs\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.263120 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-ovndb-tls-certs\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.264289 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-combined-ca-bundle\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.264882 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-config" (OuterVolumeSpecName: "config") pod "94fd9386-b9a5-4617-ba8d-e985bbfcf8f0" (UID: "94fd9386-b9a5-4617-ba8d-e985bbfcf8f0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.277687 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-config\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.278071 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f4477e45-6d29-4717-9168-8cf291295a40-httpd-config\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.289128 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-55fdff466d-bbrr5" podStartSLOduration=8.289080096 podStartE2EDuration="8.289080096s" podCreationTimestamp="2025-12-02 14:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:32.23101447 +0000 UTC m=+1288.193191545" watchObservedRunningTime="2025-12-02 14:05:32.289080096 +0000 UTC m=+1288.251257171" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.304569 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qklp4\" (UniqueName: \"kubernetes.io/projected/f4477e45-6d29-4717-9168-8cf291295a40-kube-api-access-qklp4\") pod \"neutron-8c746598f-ss7rg\" (UID: \"f4477e45-6d29-4717-9168-8cf291295a40\") " pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.318677 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.399763 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.603151 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-hxbrp"] Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.636512 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-hxbrp"] Dec 02 14:05:32 crc kubenswrapper[4625]: I1202 14:05:32.870994 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94fd9386-b9a5-4617-ba8d-e985bbfcf8f0" path="/var/lib/kubelet/pods/94fd9386-b9a5-4617-ba8d-e985bbfcf8f0/volumes" Dec 02 14:05:33 crc kubenswrapper[4625]: I1202 14:05:33.251979 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3e953237-032c-443c-bb18-86369f783b77","Type":"ContainerStarted","Data":"8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476"} Dec 02 14:05:33 crc kubenswrapper[4625]: I1202 14:05:33.252225 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3e953237-032c-443c-bb18-86369f783b77" containerName="glance-log" containerID="cri-o://76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2" gracePeriod=30 Dec 02 14:05:33 crc kubenswrapper[4625]: I1202 14:05:33.252473 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3e953237-032c-443c-bb18-86369f783b77" containerName="glance-httpd" containerID="cri-o://8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476" gracePeriod=30 Dec 02 14:05:33 crc kubenswrapper[4625]: I1202 14:05:33.279680 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb409327-65b9-4713-ab91-8a9ed9c84821","Type":"ContainerStarted","Data":"b458754cc98d2b8c79e37935f1931ae0c5aad7301c04ff07284ac562f6dc0726"} Dec 02 14:05:33 crc kubenswrapper[4625]: I1202 14:05:33.279893 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bb409327-65b9-4713-ab91-8a9ed9c84821" containerName="glance-log" containerID="cri-o://0a3e9bf30b2c952a2ef87d5b43eeff8d5bcb60007f31ebf37cad839dae64e5c2" gracePeriod=30 Dec 02 14:05:33 crc kubenswrapper[4625]: I1202 14:05:33.280024 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bb409327-65b9-4713-ab91-8a9ed9c84821" containerName="glance-httpd" containerID="cri-o://b458754cc98d2b8c79e37935f1931ae0c5aad7301c04ff07284ac562f6dc0726" gracePeriod=30 Dec 02 14:05:33 crc kubenswrapper[4625]: I1202 14:05:33.289686 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" event={"ID":"2becaaf4-3b57-4612-b20e-ef1b93b563d1","Type":"ContainerStarted","Data":"9b995a59b8303f5f74b62222e34d3ab993e62497bd73b859b1444149cc24eb8d"} Dec 02 14:05:33 crc kubenswrapper[4625]: I1202 14:05:33.290028 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:33 crc kubenswrapper[4625]: I1202 14:05:33.318513 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.318483533 podStartE2EDuration="8.318483533s" podCreationTimestamp="2025-12-02 14:05:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:33.290261852 +0000 UTC m=+1289.252438927" watchObservedRunningTime="2025-12-02 14:05:33.318483533 +0000 UTC m=+1289.280660608" Dec 02 14:05:33 crc kubenswrapper[4625]: I1202 14:05:33.400990 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" podStartSLOduration=8.400935587 podStartE2EDuration="8.400935587s" podCreationTimestamp="2025-12-02 14:05:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:33.335226525 +0000 UTC m=+1289.297403620" watchObservedRunningTime="2025-12-02 14:05:33.400935587 +0000 UTC m=+1289.363112672" Dec 02 14:05:33 crc kubenswrapper[4625]: I1202 14:05:33.433428 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.433403133 podStartE2EDuration="9.433403133s" podCreationTimestamp="2025-12-02 14:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:33.393695962 +0000 UTC m=+1289.355873057" watchObservedRunningTime="2025-12-02 14:05:33.433403133 +0000 UTC m=+1289.395580208" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.091980 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8c746598f-ss7rg"] Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.354889 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.418104 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8c746598f-ss7rg" event={"ID":"f4477e45-6d29-4717-9168-8cf291295a40","Type":"ContainerStarted","Data":"50dcfc0d4297bdfc8bb7e11963157c60ee1839c36b350b6e83e0d0d714048b08"} Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.435772 4625 generic.go:334] "Generic (PLEG): container finished" podID="3e953237-032c-443c-bb18-86369f783b77" containerID="8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476" exitCode=143 Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.435831 4625 generic.go:334] "Generic (PLEG): container finished" podID="3e953237-032c-443c-bb18-86369f783b77" containerID="76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2" exitCode=143 Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.435910 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.435947 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3e953237-032c-443c-bb18-86369f783b77","Type":"ContainerDied","Data":"8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476"} Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.436010 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3e953237-032c-443c-bb18-86369f783b77","Type":"ContainerDied","Data":"76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2"} Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.436022 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3e953237-032c-443c-bb18-86369f783b77","Type":"ContainerDied","Data":"fb01de7efe43e1a1133659ad14221db5605cbe8acfa64de64981bed7a582ba41"} Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.436038 4625 scope.go:117] "RemoveContainer" containerID="8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.498074 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-config-data\") pod \"3e953237-032c-443c-bb18-86369f783b77\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.498242 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"3e953237-032c-443c-bb18-86369f783b77\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.499567 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e953237-032c-443c-bb18-86369f783b77-httpd-run\") pod \"3e953237-032c-443c-bb18-86369f783b77\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.499614 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e953237-032c-443c-bb18-86369f783b77-logs\") pod \"3e953237-032c-443c-bb18-86369f783b77\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.499669 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-combined-ca-bundle\") pod \"3e953237-032c-443c-bb18-86369f783b77\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.499720 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd8n7\" (UniqueName: \"kubernetes.io/projected/3e953237-032c-443c-bb18-86369f783b77-kube-api-access-vd8n7\") pod \"3e953237-032c-443c-bb18-86369f783b77\" (UID: \"3e953237-032c-443c-bb18-86369f783b77\") " Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.499800 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-scripts\") pod \"3e953237-032c-443c-bb18-86369f783b77\" (UID: 
\"3e953237-032c-443c-bb18-86369f783b77\") " Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.503902 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e953237-032c-443c-bb18-86369f783b77-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3e953237-032c-443c-bb18-86369f783b77" (UID: "3e953237-032c-443c-bb18-86369f783b77"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.505593 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e953237-032c-443c-bb18-86369f783b77-logs" (OuterVolumeSpecName: "logs") pod "3e953237-032c-443c-bb18-86369f783b77" (UID: "3e953237-032c-443c-bb18-86369f783b77"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.508232 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "3e953237-032c-443c-bb18-86369f783b77" (UID: "3e953237-032c-443c-bb18-86369f783b77"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.544657 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e953237-032c-443c-bb18-86369f783b77-kube-api-access-vd8n7" (OuterVolumeSpecName: "kube-api-access-vd8n7") pod "3e953237-032c-443c-bb18-86369f783b77" (UID: "3e953237-032c-443c-bb18-86369f783b77"). InnerVolumeSpecName "kube-api-access-vd8n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.544819 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-scripts" (OuterVolumeSpecName: "scripts") pod "3e953237-032c-443c-bb18-86369f783b77" (UID: "3e953237-032c-443c-bb18-86369f783b77"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.594029 4625 generic.go:334] "Generic (PLEG): container finished" podID="bb409327-65b9-4713-ab91-8a9ed9c84821" containerID="b458754cc98d2b8c79e37935f1931ae0c5aad7301c04ff07284ac562f6dc0726" exitCode=143 Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.594096 4625 generic.go:334] "Generic (PLEG): container finished" podID="bb409327-65b9-4713-ab91-8a9ed9c84821" containerID="0a3e9bf30b2c952a2ef87d5b43eeff8d5bcb60007f31ebf37cad839dae64e5c2" exitCode=143 Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.595461 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb409327-65b9-4713-ab91-8a9ed9c84821","Type":"ContainerDied","Data":"b458754cc98d2b8c79e37935f1931ae0c5aad7301c04ff07284ac562f6dc0726"} Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.595508 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb409327-65b9-4713-ab91-8a9ed9c84821","Type":"ContainerDied","Data":"0a3e9bf30b2c952a2ef87d5b43eeff8d5bcb60007f31ebf37cad839dae64e5c2"} Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.617029 4625 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.617074 4625 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e953237-032c-443c-bb18-86369f783b77-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.617091 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e953237-032c-443c-bb18-86369f783b77-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.617104 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd8n7\" (UniqueName: \"kubernetes.io/projected/3e953237-032c-443c-bb18-86369f783b77-kube-api-access-vd8n7\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.617117 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.639164 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-config-data" (OuterVolumeSpecName: "config-data") pod "3e953237-032c-443c-bb18-86369f783b77" (UID: "3e953237-032c-443c-bb18-86369f783b77"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.646575 4625 scope.go:117] "RemoveContainer" containerID="76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.687500 4625 scope.go:117] "RemoveContainer" containerID="8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476" Dec 02 14:05:34 crc kubenswrapper[4625]: E1202 14:05:34.689954 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476\": container with ID starting with 8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476 not found: ID does not exist" containerID="8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.690070 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476"} err="failed to get container status \"8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476\": rpc error: code = NotFound desc = could not find container \"8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476\": container with ID starting with 8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476 not found: ID does not exist" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.690138 4625 scope.go:117] "RemoveContainer" containerID="76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2" Dec 02 14:05:34 crc kubenswrapper[4625]: E1202 14:05:34.691647 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2\": container with ID starting with 76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2 not found: ID does not exist" containerID="76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.691701 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2"} err="failed to get container status \"76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2\": rpc error: code = NotFound desc = could not find container \"76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2\": container with ID starting with 76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2 not found: ID does not exist" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.691724 4625 scope.go:117] "RemoveContainer" containerID="8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.694784 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476"} err="failed to get container status \"8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476\": rpc error: code = NotFound desc = could not find container \"8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476\": container with ID starting with 8946fe3a56645c883c075b99f2cf302e86a7b8fec31d8a3c0fa980ea0f8af476 not found: ID does not exist" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.694819 4625 
scope.go:117] "RemoveContainer" containerID="76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.705763 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2"} err="failed to get container status \"76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2\": rpc error: code = NotFound desc = could not find container \"76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2\": container with ID starting with 76da6ea9eb081a93c1ec0f8e067da0b9647fb113e7f44133f390dc8d56050dd2 not found: ID does not exist" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.720865 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.734086 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e953237-032c-443c-bb18-86369f783b77" (UID: "3e953237-032c-443c-bb18-86369f783b77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.752223 4625 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.829652 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e953237-032c-443c-bb18-86369f783b77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.830503 4625 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.839991 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.907649 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.908051 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:05:34 crc kubenswrapper[4625]: E1202 14:05:34.909071 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e953237-032c-443c-bb18-86369f783b77" containerName="glance-httpd" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.909162 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e953237-032c-443c-bb18-86369f783b77" containerName="glance-httpd" Dec 02 14:05:34 crc kubenswrapper[4625]: E1202 14:05:34.909232 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e953237-032c-443c-bb18-86369f783b77" containerName="glance-log" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.909283 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e953237-032c-443c-bb18-86369f783b77" containerName="glance-log" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.909596 4625 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3e953237-032c-443c-bb18-86369f783b77" containerName="glance-httpd" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.909683 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e953237-032c-443c-bb18-86369f783b77" containerName="glance-log" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.911327 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.917623 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.934828 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:05:34 crc kubenswrapper[4625]: I1202 14:05:34.954196 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.041099 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.041157 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39cda872-2de1-4f58-9eda-16328ffa31ac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.041229 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.041258 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.041303 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxbn5\" (UniqueName: \"kubernetes.io/projected/39cda872-2de1-4f58-9eda-16328ffa31ac-kube-api-access-hxbn5\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.041539 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.041576 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39cda872-2de1-4f58-9eda-16328ffa31ac-logs\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.041601 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.047268 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.145106 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvwtr\" (UniqueName: \"kubernetes.io/projected/bb409327-65b9-4713-ab91-8a9ed9c84821-kube-api-access-fvwtr\") pod \"bb409327-65b9-4713-ab91-8a9ed9c84821\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.145236 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-config-data\") pod \"bb409327-65b9-4713-ab91-8a9ed9c84821\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.145258 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-combined-ca-bundle\") pod \"bb409327-65b9-4713-ab91-8a9ed9c84821\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.145352 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"bb409327-65b9-4713-ab91-8a9ed9c84821\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.145456 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-scripts\") pod \"bb409327-65b9-4713-ab91-8a9ed9c84821\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.145492 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb409327-65b9-4713-ab91-8a9ed9c84821-logs\") pod \"bb409327-65b9-4713-ab91-8a9ed9c84821\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.145513 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb409327-65b9-4713-ab91-8a9ed9c84821-httpd-run\") pod \"bb409327-65b9-4713-ab91-8a9ed9c84821\" (UID: \"bb409327-65b9-4713-ab91-8a9ed9c84821\") " Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.145831 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.145868 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39cda872-2de1-4f58-9eda-16328ffa31ac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.145914 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.145950 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.145988 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxbn5\" (UniqueName: \"kubernetes.io/projected/39cda872-2de1-4f58-9eda-16328ffa31ac-kube-api-access-hxbn5\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.146072 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.146098 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39cda872-2de1-4f58-9eda-16328ffa31ac-logs\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.146115 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.146386 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.149798 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb409327-65b9-4713-ab91-8a9ed9c84821-logs" (OuterVolumeSpecName: "logs") pod "bb409327-65b9-4713-ab91-8a9ed9c84821" (UID: "bb409327-65b9-4713-ab91-8a9ed9c84821"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.150086 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb409327-65b9-4713-ab91-8a9ed9c84821-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bb409327-65b9-4713-ab91-8a9ed9c84821" (UID: "bb409327-65b9-4713-ab91-8a9ed9c84821"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.150674 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39cda872-2de1-4f58-9eda-16328ffa31ac-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.165793 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39cda872-2de1-4f58-9eda-16328ffa31ac-logs\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.167364 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.176347 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-scripts\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.177760 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-scripts" (OuterVolumeSpecName: "scripts") pod "bb409327-65b9-4713-ab91-8a9ed9c84821" (UID: "bb409327-65b9-4713-ab91-8a9ed9c84821"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.182704 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.192568 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "bb409327-65b9-4713-ab91-8a9ed9c84821" (UID: "bb409327-65b9-4713-ab91-8a9ed9c84821"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.193793 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-config-data\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.203765 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxbn5\" (UniqueName: \"kubernetes.io/projected/39cda872-2de1-4f58-9eda-16328ffa31ac-kube-api-access-hxbn5\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.217357 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb409327-65b9-4713-ab91-8a9ed9c84821-kube-api-access-fvwtr" (OuterVolumeSpecName: "kube-api-access-fvwtr") pod "bb409327-65b9-4713-ab91-8a9ed9c84821" (UID: "bb409327-65b9-4713-ab91-8a9ed9c84821"). InnerVolumeSpecName "kube-api-access-fvwtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.258093 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvwtr\" (UniqueName: \"kubernetes.io/projected/bb409327-65b9-4713-ab91-8a9ed9c84821-kube-api-access-fvwtr\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.258872 4625 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.258919 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.258931 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb409327-65b9-4713-ab91-8a9ed9c84821-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.258940 4625 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb409327-65b9-4713-ab91-8a9ed9c84821-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.312777 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb409327-65b9-4713-ab91-8a9ed9c84821" (UID: "bb409327-65b9-4713-ab91-8a9ed9c84821"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.360529 4625 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.372991 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.377851 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.378245 4625 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.427664 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-config-data" (OuterVolumeSpecName: "config-data") pod "bb409327-65b9-4713-ab91-8a9ed9c84821" (UID: "bb409327-65b9-4713-ab91-8a9ed9c84821"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.480379 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb409327-65b9-4713-ab91-8a9ed9c84821-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.599259 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.619606 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb409327-65b9-4713-ab91-8a9ed9c84821","Type":"ContainerDied","Data":"c9f0a593e2b74e8aecd04b4ace1efb849cad2e87751a703df5b2f6b4f68df887"} Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.619648 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.619789 4625 scope.go:117] "RemoveContainer" containerID="b458754cc98d2b8c79e37935f1931ae0c5aad7301c04ff07284ac562f6dc0726" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.663875 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8c746598f-ss7rg" event={"ID":"f4477e45-6d29-4717-9168-8cf291295a40","Type":"ContainerStarted","Data":"d6ee2495cfba1c88d78e9bdff2c0e42ea928e1d25f26b9a0943b747de0cbeb45"} Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.663945 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8c746598f-ss7rg" event={"ID":"f4477e45-6d29-4717-9168-8cf291295a40","Type":"ContainerStarted","Data":"1f61a17700f8909c72d6dfc03a55a97395ac9a34f9c271fc333ee1b9078c1b64"} Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.731879 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.821757 4625 scope.go:117] "RemoveContainer" containerID="0a3e9bf30b2c952a2ef87d5b43eeff8d5bcb60007f31ebf37cad839dae64e5c2" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.852895 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.918217 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:05:35 crc kubenswrapper[4625]: E1202 14:05:35.918801 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb409327-65b9-4713-ab91-8a9ed9c84821" containerName="glance-httpd" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.918819 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb409327-65b9-4713-ab91-8a9ed9c84821" containerName="glance-httpd" Dec 02 14:05:35 crc kubenswrapper[4625]: E1202 14:05:35.918856 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb409327-65b9-4713-ab91-8a9ed9c84821" containerName="glance-log" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.918862 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb409327-65b9-4713-ab91-8a9ed9c84821" containerName="glance-log" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.919061 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb409327-65b9-4713-ab91-8a9ed9c84821" containerName="glance-httpd" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.919087 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb409327-65b9-4713-ab91-8a9ed9c84821" containerName="glance-log" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.920214 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.944643 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 02 14:05:35 crc kubenswrapper[4625]: I1202 14:05:35.967240 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.019683 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.020338 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69sjb\" (UniqueName: \"kubernetes.io/projected/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-kube-api-access-69sjb\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.020467 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-config-data\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.020552 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.020575 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-scripts\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.020600 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.020633 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-logs\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.020666 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-public-tls-certs\") pod \"glance-default-external-api-0\" 
(UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.047019 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.095697 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.095791 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.124883 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-config-data\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.124963 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.125179 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-scripts\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.125197 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.125217 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-logs\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.125239 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.125408 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.125426 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69sjb\" (UniqueName: \"kubernetes.io/projected/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-kube-api-access-69sjb\") pod 
\"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.126273 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.127641 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-logs\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.127917 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.166348 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.166402 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.173103 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-config-data\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.189147 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69sjb\" (UniqueName: \"kubernetes.io/projected/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-kube-api-access-69sjb\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.197710 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-scripts\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.202075 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " pod="openstack/glance-default-external-api-0" Dec 02 
Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.358478 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dc4db5bfb-zbs4l"
Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.359003 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dc4db5bfb-zbs4l"
Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.543011 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.697010 4625 generic.go:334] "Generic (PLEG): container finished" podID="d7887abf-7df6-4058-b3f0-e58295b168c1" containerID="893172c1648c0029902621395892771df41fb07b84730fb9215235b0335e2c67" exitCode=0
Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.697764 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2swhw" event={"ID":"d7887abf-7df6-4058-b3f0-e58295b168c1","Type":"ContainerDied","Data":"893172c1648c0029902621395892771df41fb07b84730fb9215235b0335e2c67"}
Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.735106 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"39cda872-2de1-4f58-9eda-16328ffa31ac","Type":"ContainerStarted","Data":"cffdbb07ec9c92386cae0863e94f8961c3231c4ad73e5dcad2a8d19e25e7115c"}
Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.738464 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8c746598f-ss7rg"
Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.831333 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8c746598f-ss7rg" podStartSLOduration=5.831283886 podStartE2EDuration="5.831283886s" podCreationTimestamp="2025-12-02 14:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:36.784606097 +0000 UTC m=+1292.746783162" watchObservedRunningTime="2025-12-02 14:05:36.831283886 +0000 UTC m=+1292.793460951"
Dec 02 14:05:36 crc kubenswrapper[4625]: E1202 14:05:36.886580 4625 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7887abf_7df6_4058_b3f0_e58295b168c1.slice/crio-conmon-893172c1648c0029902621395892771df41fb07b84730fb9215235b0335e2c67.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7887abf_7df6_4058_b3f0_e58295b168c1.slice/crio-893172c1648c0029902621395892771df41fb07b84730fb9215235b0335e2c67.scope\": RecentStats: unable to find data in memory cache]"
Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.951302 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e953237-032c-443c-bb18-86369f783b77" path="/var/lib/kubelet/pods/3e953237-032c-443c-bb18-86369f783b77/volumes"
Dec 02 14:05:36 crc kubenswrapper[4625]: I1202 14:05:36.952338 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb409327-65b9-4713-ab91-8a9ed9c84821" path="/var/lib/kubelet/pods/bb409327-65b9-4713-ab91-8a9ed9c84821/volumes"
Dec 02 14:05:37 crc kubenswrapper[4625]: I1202 14:05:37.404175 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 02 14:05:37 crc kubenswrapper[4625]: I1202 14:05:37.757562 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d","Type":"ContainerStarted","Data":"90c6b34f897c35f7e70ef83d1e4d9626d80b83568e1716a598a80217f7c6eaf9"}
Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.624472 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2swhw"
Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.699690 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-config-data\") pod \"d7887abf-7df6-4058-b3f0-e58295b168c1\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") "
Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.699900 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r27b6\" (UniqueName: \"kubernetes.io/projected/d7887abf-7df6-4058-b3f0-e58295b168c1-kube-api-access-r27b6\") pod \"d7887abf-7df6-4058-b3f0-e58295b168c1\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") "
Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.699950 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-combined-ca-bundle\") pod \"d7887abf-7df6-4058-b3f0-e58295b168c1\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") "
Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.700015 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7887abf-7df6-4058-b3f0-e58295b168c1-logs\") pod \"d7887abf-7df6-4058-b3f0-e58295b168c1\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") "
Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.700115 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-scripts\") pod \"d7887abf-7df6-4058-b3f0-e58295b168c1\" (UID: \"d7887abf-7df6-4058-b3f0-e58295b168c1\") "
Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.703838 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7887abf-7df6-4058-b3f0-e58295b168c1-logs" (OuterVolumeSpecName: "logs") pod "d7887abf-7df6-4058-b3f0-e58295b168c1" (UID: "d7887abf-7df6-4058-b3f0-e58295b168c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.725063 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-scripts" (OuterVolumeSpecName: "scripts") pod "d7887abf-7df6-4058-b3f0-e58295b168c1" (UID: "d7887abf-7df6-4058-b3f0-e58295b168c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.741997 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7887abf-7df6-4058-b3f0-e58295b168c1-kube-api-access-r27b6" (OuterVolumeSpecName: "kube-api-access-r27b6") pod "d7887abf-7df6-4058-b3f0-e58295b168c1" (UID: "d7887abf-7df6-4058-b3f0-e58295b168c1"). InnerVolumeSpecName "kube-api-access-r27b6". PluginName "kubernetes.io/projected", VolumeGidValue ""
InnerVolumeSpecName "kube-api-access-r27b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.785215 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-config-data" (OuterVolumeSpecName: "config-data") pod "d7887abf-7df6-4058-b3f0-e58295b168c1" (UID: "d7887abf-7df6-4058-b3f0-e58295b168c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.796558 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7887abf-7df6-4058-b3f0-e58295b168c1" (UID: "d7887abf-7df6-4058-b3f0-e58295b168c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.804419 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.804453 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r27b6\" (UniqueName: \"kubernetes.io/projected/d7887abf-7df6-4058-b3f0-e58295b168c1-kube-api-access-r27b6\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.804469 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.804480 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7887abf-7df6-4058-b3f0-e58295b168c1-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.804488 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7887abf-7df6-4058-b3f0-e58295b168c1-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.804650 4625 generic.go:334] "Generic (PLEG): container finished" podID="c2b840d2-7458-4769-9650-e62ff8676008" containerID="09a925c91fc8440516ea11a14fc8c3dfdcb74f05f26fc5cde8087f4c21ccbf41" exitCode=0 Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.804720 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fdkpr" event={"ID":"c2b840d2-7458-4769-9650-e62ff8676008","Type":"ContainerDied","Data":"09a925c91fc8440516ea11a14fc8c3dfdcb74f05f26fc5cde8087f4c21ccbf41"} Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.815004 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"39cda872-2de1-4f58-9eda-16328ffa31ac","Type":"ContainerStarted","Data":"fd2db654ed13b96cd00c49d3e65ecd785026bd289707eb086e72d1ca866e19a2"} Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.821862 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2swhw" event={"ID":"d7887abf-7df6-4058-b3f0-e58295b168c1","Type":"ContainerDied","Data":"aea39930c90741ba0ab23f18d8d1e76b877e35986e2177d81786f53fb96387af"} Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.821918 4625 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aea39930c90741ba0ab23f18d8d1e76b877e35986e2177d81786f53fb96387af" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.821988 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2swhw" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.949245 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5855fb4fd-8xvmf"] Dec 02 14:05:38 crc kubenswrapper[4625]: E1202 14:05:38.949629 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7887abf-7df6-4058-b3f0-e58295b168c1" containerName="placement-db-sync" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.949643 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7887abf-7df6-4058-b3f0-e58295b168c1" containerName="placement-db-sync" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.949829 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7887abf-7df6-4058-b3f0-e58295b168c1" containerName="placement-db-sync" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.950912 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5855fb4fd-8xvmf"] Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.951015 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5855fb4fd-8xvmf" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.954496 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.954994 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zlllv" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.955205 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.955411 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 02 14:05:38 crc kubenswrapper[4625]: I1202 14:05:38.962036 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.007203 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3dac2b-e3ca-4fde-b347-598e80af89ce-public-tls-certs\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf" Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.007381 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3dac2b-e3ca-4fde-b347-598e80af89ce-scripts\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf" Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.007475 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcjg5\" (UniqueName: \"kubernetes.io/projected/dc3dac2b-e3ca-4fde-b347-598e80af89ce-kube-api-access-bcjg5\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf" Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.010408 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3dac2b-e3ca-4fde-b347-598e80af89ce-internal-tls-certs\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.010741 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3dac2b-e3ca-4fde-b347-598e80af89ce-combined-ca-bundle\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.010824 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3dac2b-e3ca-4fde-b347-598e80af89ce-logs\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.112738 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3dac2b-e3ca-4fde-b347-598e80af89ce-internal-tls-certs\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.112833 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3dac2b-e3ca-4fde-b347-598e80af89ce-combined-ca-bundle\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.112860 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3dac2b-e3ca-4fde-b347-598e80af89ce-logs\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.112986 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3dac2b-e3ca-4fde-b347-598e80af89ce-public-tls-certs\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.113007 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3dac2b-e3ca-4fde-b347-598e80af89ce-scripts\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.113121 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcjg5\" (UniqueName: \"kubernetes.io/projected/dc3dac2b-e3ca-4fde-b347-598e80af89ce-kube-api-access-bcjg5\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.113192 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3dac2b-e3ca-4fde-b347-598e80af89ce-config-data\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.114440 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3dac2b-e3ca-4fde-b347-598e80af89ce-logs\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.131546 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3dac2b-e3ca-4fde-b347-598e80af89ce-config-data\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.132940 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3dac2b-e3ca-4fde-b347-598e80af89ce-public-tls-certs\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.133522 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3dac2b-e3ca-4fde-b347-598e80af89ce-scripts\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.134007 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3dac2b-e3ca-4fde-b347-598e80af89ce-combined-ca-bundle\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.137754 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3dac2b-e3ca-4fde-b347-598e80af89ce-internal-tls-certs\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.142945 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcjg5\" (UniqueName: \"kubernetes.io/projected/dc3dac2b-e3ca-4fde-b347-598e80af89ce-kube-api-access-bcjg5\") pod \"placement-5855fb4fd-8xvmf\" (UID: \"dc3dac2b-e3ca-4fde-b347-598e80af89ce\") " pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.301484 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5855fb4fd-8xvmf"
Need to start a new one" pod="openstack/placement-5855fb4fd-8xvmf" Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.859055 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"39cda872-2de1-4f58-9eda-16328ffa31ac","Type":"ContainerStarted","Data":"d74e9a467dab30b830e84a2a40c7115b6c525d66bc102bfb933e87f700ba68e0"} Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.872758 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d","Type":"ContainerStarted","Data":"56c3f6cb475a41b5dd8f73fb6596069ce9425fc8aaba665f13f9e58eda5f2976"} Dec 02 14:05:39 crc kubenswrapper[4625]: I1202 14:05:39.916971 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.916912797 podStartE2EDuration="5.916912797s" podCreationTimestamp="2025-12-02 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:39.901204023 +0000 UTC m=+1295.863381118" watchObservedRunningTime="2025-12-02 14:05:39.916912797 +0000 UTC m=+1295.879089892" Dec 02 14:05:40 crc kubenswrapper[4625]: I1202 14:05:40.178601 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5855fb4fd-8xvmf"] Dec 02 14:05:40 crc kubenswrapper[4625]: I1202 14:05:40.585457 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fdkpr" Dec 02 14:05:40 crc kubenswrapper[4625]: I1202 14:05:40.617003 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:05:40 crc kubenswrapper[4625]: I1202 14:05:40.681463 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qjmj\" (UniqueName: \"kubernetes.io/projected/c2b840d2-7458-4769-9650-e62ff8676008-kube-api-access-9qjmj\") pod \"c2b840d2-7458-4769-9650-e62ff8676008\" (UID: \"c2b840d2-7458-4769-9650-e62ff8676008\") " Dec 02 14:05:40 crc kubenswrapper[4625]: I1202 14:05:40.681521 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b840d2-7458-4769-9650-e62ff8676008-combined-ca-bundle\") pod \"c2b840d2-7458-4769-9650-e62ff8676008\" (UID: \"c2b840d2-7458-4769-9650-e62ff8676008\") " Dec 02 14:05:40 crc kubenswrapper[4625]: I1202 14:05:40.681655 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c2b840d2-7458-4769-9650-e62ff8676008-db-sync-config-data\") pod \"c2b840d2-7458-4769-9650-e62ff8676008\" (UID: \"c2b840d2-7458-4769-9650-e62ff8676008\") " Dec 02 14:05:40 crc kubenswrapper[4625]: I1202 14:05:40.739349 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"] Dec 02 14:05:40 crc kubenswrapper[4625]: I1202 14:05:40.739719 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw" podUID="ef2b49b4-2807-4a26-8876-3cc189692b73" containerName="dnsmasq-dns" containerID="cri-o://84273fd170c0695e5976de2c52c5c3eb60351d7db691337c9e63218b130c0ec8" gracePeriod=10 Dec 02 14:05:40 crc kubenswrapper[4625]: I1202 14:05:40.793583 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c2b840d2-7458-4769-9650-e62ff8676008-kube-api-access-9qjmj" (OuterVolumeSpecName: "kube-api-access-9qjmj") pod "c2b840d2-7458-4769-9650-e62ff8676008" (UID: "c2b840d2-7458-4769-9650-e62ff8676008"). InnerVolumeSpecName "kube-api-access-9qjmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:40 crc kubenswrapper[4625]: I1202 14:05:40.793606 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2b840d2-7458-4769-9650-e62ff8676008-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c2b840d2-7458-4769-9650-e62ff8676008" (UID: "c2b840d2-7458-4769-9650-e62ff8676008"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:40 crc kubenswrapper[4625]: I1202 14:05:40.887753 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qjmj\" (UniqueName: \"kubernetes.io/projected/c2b840d2-7458-4769-9650-e62ff8676008-kube-api-access-9qjmj\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:40 crc kubenswrapper[4625]: I1202 14:05:40.887788 4625 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c2b840d2-7458-4769-9650-e62ff8676008-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:40 crc kubenswrapper[4625]: I1202 14:05:40.979409 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fdkpr" event={"ID":"c2b840d2-7458-4769-9650-e62ff8676008","Type":"ContainerDied","Data":"135159ad9c1fadaa0cafd79a9e6209651a8d002335e4760d482873b674bbcce7"} Dec 02 14:05:40 crc kubenswrapper[4625]: I1202 14:05:40.979911 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="135159ad9c1fadaa0cafd79a9e6209651a8d002335e4760d482873b674bbcce7" Dec 02 14:05:40 crc kubenswrapper[4625]: I1202 14:05:40.980030 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fdkpr" Dec 02 14:05:40 crc kubenswrapper[4625]: I1202 14:05:40.995267 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5855fb4fd-8xvmf" event={"ID":"dc3dac2b-e3ca-4fde-b347-598e80af89ce","Type":"ContainerStarted","Data":"6ed40937ff641d2a9ed8f490694acc530d3f0df3e9b9dbd12e903846221879b7"} Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.009877 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2b840d2-7458-4769-9650-e62ff8676008-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2b840d2-7458-4769-9650-e62ff8676008" (UID: "c2b840d2-7458-4769-9650-e62ff8676008"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.034453 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-blvbp" event={"ID":"c29ce362-3978-4713-833d-49aab29a394c","Type":"ContainerStarted","Data":"da74b31f811c5383a2f75c8c86c15ae7cd8f586c752c735f8fdb5f8d2612694e"} Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.067794 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-blvbp" podStartSLOduration=7.98270236 podStartE2EDuration="1m6.067763537s" podCreationTimestamp="2025-12-02 14:04:35 +0000 UTC" firstStartedPulling="2025-12-02 14:04:40.65994868 +0000 UTC m=+1236.622125755" lastFinishedPulling="2025-12-02 14:05:38.745009857 +0000 UTC m=+1294.707186932" observedRunningTime="2025-12-02 14:05:41.065920657 +0000 UTC m=+1297.028097732" watchObservedRunningTime="2025-12-02 14:05:41.067763537 +0000 UTC m=+1297.029940612" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.107103 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b840d2-7458-4769-9650-e62ff8676008-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.270060 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-68c4cddcdc-kxpt7"] Dec 02 14:05:41 crc kubenswrapper[4625]: E1202 14:05:41.271365 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b840d2-7458-4769-9650-e62ff8676008" containerName="barbican-db-sync" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.271401 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b840d2-7458-4769-9650-e62ff8676008" containerName="barbican-db-sync" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.271738 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2b840d2-7458-4769-9650-e62ff8676008" containerName="barbican-db-sync" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.275126 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-68c4cddcdc-kxpt7" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.286184 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.382877 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68c4cddcdc-kxpt7"] Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.415473 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183dcad1-443e-47e0-bc13-d98d7c316069-config-data\") pod \"barbican-worker-68c4cddcdc-kxpt7\" (UID: \"183dcad1-443e-47e0-bc13-d98d7c316069\") " pod="openstack/barbican-worker-68c4cddcdc-kxpt7" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.416076 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpnsb\" (UniqueName: \"kubernetes.io/projected/183dcad1-443e-47e0-bc13-d98d7c316069-kube-api-access-dpnsb\") pod \"barbican-worker-68c4cddcdc-kxpt7\" (UID: \"183dcad1-443e-47e0-bc13-d98d7c316069\") " pod="openstack/barbican-worker-68c4cddcdc-kxpt7" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.416210 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183dcad1-443e-47e0-bc13-d98d7c316069-combined-ca-bundle\") pod \"barbican-worker-68c4cddcdc-kxpt7\" (UID: \"183dcad1-443e-47e0-bc13-d98d7c316069\") " pod="openstack/barbican-worker-68c4cddcdc-kxpt7" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.416341 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/183dcad1-443e-47e0-bc13-d98d7c316069-config-data-custom\") pod \"barbican-worker-68c4cddcdc-kxpt7\" (UID: \"183dcad1-443e-47e0-bc13-d98d7c316069\") " pod="openstack/barbican-worker-68c4cddcdc-kxpt7" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.416482 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/183dcad1-443e-47e0-bc13-d98d7c316069-logs\") pod \"barbican-worker-68c4cddcdc-kxpt7\" (UID: \"183dcad1-443e-47e0-bc13-d98d7c316069\") " pod="openstack/barbican-worker-68c4cddcdc-kxpt7" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.446420 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5d57b47bd4-2hxfs"] Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.448817 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.470533 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.524394 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d-combined-ca-bundle\") pod \"barbican-keystone-listener-5d57b47bd4-2hxfs\" (UID: \"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d\") " pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.524985 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d-config-data\") pod \"barbican-keystone-listener-5d57b47bd4-2hxfs\" (UID: \"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d\") " pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.525024 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183dcad1-443e-47e0-bc13-d98d7c316069-config-data\") pod \"barbican-worker-68c4cddcdc-kxpt7\" (UID: \"183dcad1-443e-47e0-bc13-d98d7c316069\") " pod="openstack/barbican-worker-68c4cddcdc-kxpt7" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.525121 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpnsb\" (UniqueName: \"kubernetes.io/projected/183dcad1-443e-47e0-bc13-d98d7c316069-kube-api-access-dpnsb\") pod \"barbican-worker-68c4cddcdc-kxpt7\" (UID: \"183dcad1-443e-47e0-bc13-d98d7c316069\") " pod="openstack/barbican-worker-68c4cddcdc-kxpt7" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.525159 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183dcad1-443e-47e0-bc13-d98d7c316069-combined-ca-bundle\") pod \"barbican-worker-68c4cddcdc-kxpt7\" (UID: \"183dcad1-443e-47e0-bc13-d98d7c316069\") " pod="openstack/barbican-worker-68c4cddcdc-kxpt7" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.525190 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/183dcad1-443e-47e0-bc13-d98d7c316069-config-data-custom\") pod \"barbican-worker-68c4cddcdc-kxpt7\" (UID: \"183dcad1-443e-47e0-bc13-d98d7c316069\") " pod="openstack/barbican-worker-68c4cddcdc-kxpt7" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.525228 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/183dcad1-443e-47e0-bc13-d98d7c316069-logs\") pod \"barbican-worker-68c4cddcdc-kxpt7\" (UID: \"183dcad1-443e-47e0-bc13-d98d7c316069\") " pod="openstack/barbican-worker-68c4cddcdc-kxpt7" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.525276 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d-logs\") pod \"barbican-keystone-listener-5d57b47bd4-2hxfs\" (UID: \"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d\") " pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" Dec 02 14:05:41 
crc kubenswrapper[4625]: I1202 14:05:41.525338 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvk2k\" (UniqueName: \"kubernetes.io/projected/ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d-kube-api-access-pvk2k\") pod \"barbican-keystone-listener-5d57b47bd4-2hxfs\" (UID: \"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d\") " pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.529144 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/183dcad1-443e-47e0-bc13-d98d7c316069-logs\") pod \"barbican-worker-68c4cddcdc-kxpt7\" (UID: \"183dcad1-443e-47e0-bc13-d98d7c316069\") " pod="openstack/barbican-worker-68c4cddcdc-kxpt7" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.549012 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d57b47bd4-2hxfs"] Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.557619 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d-config-data-custom\") pod \"barbican-keystone-listener-5d57b47bd4-2hxfs\" (UID: \"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d\") " pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.561858 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183dcad1-443e-47e0-bc13-d98d7c316069-combined-ca-bundle\") pod \"barbican-worker-68c4cddcdc-kxpt7\" (UID: \"183dcad1-443e-47e0-bc13-d98d7c316069\") " pod="openstack/barbican-worker-68c4cddcdc-kxpt7" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.617066 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/183dcad1-443e-47e0-bc13-d98d7c316069-config-data-custom\") pod \"barbican-worker-68c4cddcdc-kxpt7\" (UID: \"183dcad1-443e-47e0-bc13-d98d7c316069\") " pod="openstack/barbican-worker-68c4cddcdc-kxpt7" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.626390 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183dcad1-443e-47e0-bc13-d98d7c316069-config-data\") pod \"barbican-worker-68c4cddcdc-kxpt7\" (UID: \"183dcad1-443e-47e0-bc13-d98d7c316069\") " pod="openstack/barbican-worker-68c4cddcdc-kxpt7" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.663526 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpnsb\" (UniqueName: \"kubernetes.io/projected/183dcad1-443e-47e0-bc13-d98d7c316069-kube-api-access-dpnsb\") pod \"barbican-worker-68c4cddcdc-kxpt7\" (UID: \"183dcad1-443e-47e0-bc13-d98d7c316069\") " pod="openstack/barbican-worker-68c4cddcdc-kxpt7" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.673156 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d-combined-ca-bundle\") pod \"barbican-keystone-listener-5d57b47bd4-2hxfs\" (UID: \"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d\") " pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.673234 4625 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d-config-data\") pod \"barbican-keystone-listener-5d57b47bd4-2hxfs\" (UID: \"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d\") " pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.673383 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d-logs\") pod \"barbican-keystone-listener-5d57b47bd4-2hxfs\" (UID: \"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d\") " pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.673435 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvk2k\" (UniqueName: \"kubernetes.io/projected/ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d-kube-api-access-pvk2k\") pod \"barbican-keystone-listener-5d57b47bd4-2hxfs\" (UID: \"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d\") " pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.673474 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d-config-data-custom\") pod \"barbican-keystone-listener-5d57b47bd4-2hxfs\" (UID: \"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d\") " pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.690293 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-n2q97"] Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.693775 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d-logs\") pod \"barbican-keystone-listener-5d57b47bd4-2hxfs\" (UID: \"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d\") " pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.705290 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-n2q97" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.713081 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d-combined-ca-bundle\") pod \"barbican-keystone-listener-5d57b47bd4-2hxfs\" (UID: \"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d\") " pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.713163 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d-config-data-custom\") pod \"barbican-keystone-listener-5d57b47bd4-2hxfs\" (UID: \"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d\") " pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.719067 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d-config-data\") pod \"barbican-keystone-listener-5d57b47bd4-2hxfs\" (UID: \"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d\") " pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.723547 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-n2q97"] Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.763205 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvk2k\" (UniqueName: \"kubernetes.io/projected/ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d-kube-api-access-pvk2k\") pod \"barbican-keystone-listener-5d57b47bd4-2hxfs\" (UID: \"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d\") " pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.882772 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn864\" (UniqueName: \"kubernetes.io/projected/88761c55-9427-4195-a3c9-c54a5b8554c8-kube-api-access-vn864\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.882934 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.882961 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97" Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.882985 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-config\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97" Dec 02 14:05:41 crc 
Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.883064 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97"
Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.942401 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7b967fc68b-flqxx"]
Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.945156 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68c4cddcdc-kxpt7"
Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.975120 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b967fc68b-flqxx"
Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.980764 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.990302 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97"
Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.990514 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn864\" (UniqueName: \"kubernetes.io/projected/88761c55-9427-4195-a3c9-c54a5b8554c8-kube-api-access-vn864\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97"
Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.990802 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97"
Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.990847 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97"
Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.990892 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-config\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97"
Dec 02 14:05:41 crc kubenswrapper[4625]: I1202 14:05:41.990935 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97"
Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:41.999954 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs"
Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.021108 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b967fc68b-flqxx"]
Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.043569 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-dns-svc\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97"
Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.116445 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-combined-ca-bundle\") pod \"barbican-api-7b967fc68b-flqxx\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " pod="openstack/barbican-api-7b967fc68b-flqxx"
Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.121899 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-config-data\") pod \"barbican-api-7b967fc68b-flqxx\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " pod="openstack/barbican-api-7b967fc68b-flqxx"
Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.121999 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97"
Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.117449 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-config\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97"
Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.121359 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97"
Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.122452 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvk7j\" (UniqueName: \"kubernetes.io/projected/6305c486-3db4-4bea-8d41-555d94ea0e5d-kube-api-access-tvk7j\") pod \"barbican-api-7b967fc68b-flqxx\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " pod="openstack/barbican-api-7b967fc68b-flqxx"
Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.122862 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97"
(UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.123067 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-config-data-custom\") pod \"barbican-api-7b967fc68b-flqxx\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.123175 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6305c486-3db4-4bea-8d41-555d94ea0e5d-logs\") pod \"barbican-api-7b967fc68b-flqxx\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.160423 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn864\" (UniqueName: \"kubernetes.io/projected/88761c55-9427-4195-a3c9-c54a5b8554c8-kube-api-access-vn864\") pod \"dnsmasq-dns-85ff748b95-n2q97\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " pod="openstack/dnsmasq-dns-85ff748b95-n2q97" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.224547 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d","Type":"ContainerStarted","Data":"5db026b1ff3c76612f197105849c16890f68a14a2db39a36371700cc1d4b55a0"} Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.260609 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-config-data-custom\") pod \"barbican-api-7b967fc68b-flqxx\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.260709 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6305c486-3db4-4bea-8d41-555d94ea0e5d-logs\") pod \"barbican-api-7b967fc68b-flqxx\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.260885 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-combined-ca-bundle\") pod \"barbican-api-7b967fc68b-flqxx\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.261098 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-config-data\") pod \"barbican-api-7b967fc68b-flqxx\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.261221 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvk7j\" (UniqueName: 
\"kubernetes.io/projected/6305c486-3db4-4bea-8d41-555d94ea0e5d-kube-api-access-tvk7j\") pod \"barbican-api-7b967fc68b-flqxx\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.261773 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6305c486-3db4-4bea-8d41-555d94ea0e5d-logs\") pod \"barbican-api-7b967fc68b-flqxx\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.300396 4625 generic.go:334] "Generic (PLEG): container finished" podID="57d79055-dea2-4cd4-a642-b63d2deaa339" containerID="215f9812fe59267de0032da8a69efef7f944d128864bdbd4cf383ef1b0597e2e" exitCode=0 Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.300554 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w2qlr" event={"ID":"57d79055-dea2-4cd4-a642-b63d2deaa339","Type":"ContainerDied","Data":"215f9812fe59267de0032da8a69efef7f944d128864bdbd4cf383ef1b0597e2e"} Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.301054 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-config-data\") pod \"barbican-api-7b967fc68b-flqxx\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.306588 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-combined-ca-bundle\") pod \"barbican-api-7b967fc68b-flqxx\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.306816 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.3067803399999995 podStartE2EDuration="7.30678034s" podCreationTimestamp="2025-12-02 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:42.288756437 +0000 UTC m=+1298.250933512" watchObservedRunningTime="2025-12-02 14:05:42.30678034 +0000 UTC m=+1298.268957415" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.341553 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-n2q97" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.347197 4625 generic.go:334] "Generic (PLEG): container finished" podID="ef2b49b4-2807-4a26-8876-3cc189692b73" containerID="84273fd170c0695e5976de2c52c5c3eb60351d7db691337c9e63218b130c0ec8" exitCode=0 Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.347277 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw" event={"ID":"ef2b49b4-2807-4a26-8876-3cc189692b73","Type":"ContainerDied","Data":"84273fd170c0695e5976de2c52c5c3eb60351d7db691337c9e63218b130c0ec8"} Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.349732 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5855fb4fd-8xvmf" event={"ID":"dc3dac2b-e3ca-4fde-b347-598e80af89ce","Type":"ContainerStarted","Data":"ea383bdb9c8352825a7200ee64954e766a9651cec45267d3ba86cd215cefc95a"} Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.462971 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-config-data-custom\") pod \"barbican-api-7b967fc68b-flqxx\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.490979 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvk7j\" (UniqueName: \"kubernetes.io/projected/6305c486-3db4-4bea-8d41-555d94ea0e5d-kube-api-access-tvk7j\") pod \"barbican-api-7b967fc68b-flqxx\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.677565 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.754727 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw" Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.904649 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8n99\" (UniqueName: \"kubernetes.io/projected/ef2b49b4-2807-4a26-8876-3cc189692b73-kube-api-access-c8n99\") pod \"ef2b49b4-2807-4a26-8876-3cc189692b73\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.927615 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-nb\") pod \"ef2b49b4-2807-4a26-8876-3cc189692b73\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.927761 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-sb\") pod \"ef2b49b4-2807-4a26-8876-3cc189692b73\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.927849 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-dns-swift-storage-0\") pod \"ef2b49b4-2807-4a26-8876-3cc189692b73\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.927887 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-dns-svc\") pod \"ef2b49b4-2807-4a26-8876-3cc189692b73\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " Dec 02 14:05:42 crc kubenswrapper[4625]: I1202 14:05:42.927925 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-config\") pod \"ef2b49b4-2807-4a26-8876-3cc189692b73\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.001650 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef2b49b4-2807-4a26-8876-3cc189692b73-kube-api-access-c8n99" (OuterVolumeSpecName: "kube-api-access-c8n99") pod "ef2b49b4-2807-4a26-8876-3cc189692b73" (UID: "ef2b49b4-2807-4a26-8876-3cc189692b73"). InnerVolumeSpecName "kube-api-access-c8n99". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.032171 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8n99\" (UniqueName: \"kubernetes.io/projected/ef2b49b4-2807-4a26-8876-3cc189692b73-kube-api-access-c8n99\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.106024 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d57b47bd4-2hxfs"] Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.265624 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-config" (OuterVolumeSpecName: "config") pod "ef2b49b4-2807-4a26-8876-3cc189692b73" (UID: "ef2b49b4-2807-4a26-8876-3cc189692b73"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.271925 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef2b49b4-2807-4a26-8876-3cc189692b73" (UID: "ef2b49b4-2807-4a26-8876-3cc189692b73"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.286588 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ef2b49b4-2807-4a26-8876-3cc189692b73" (UID: "ef2b49b4-2807-4a26-8876-3cc189692b73"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.292336 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-sb\") pod \"ef2b49b4-2807-4a26-8876-3cc189692b73\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") " Dec 02 14:05:43 crc kubenswrapper[4625]: W1202 14:05:43.292918 4625 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ef2b49b4-2807-4a26-8876-3cc189692b73/volumes/kubernetes.io~configmap/ovsdbserver-sb Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.292939 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef2b49b4-2807-4a26-8876-3cc189692b73" (UID: "ef2b49b4-2807-4a26-8876-3cc189692b73"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:43 crc kubenswrapper[4625]: E1202 14:05:43.370752 4625 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-nb podName:ef2b49b4-2807-4a26-8876-3cc189692b73 nodeName:}" failed. No retries permitted until 2025-12-02 14:05:43.870698976 +0000 UTC m=+1299.832876051 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ovsdbserver-nb" (UniqueName: "kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-nb") pod "ef2b49b4-2807-4a26-8876-3cc189692b73" (UID: "ef2b49b4-2807-4a26-8876-3cc189692b73") : error deleting /var/lib/kubelet/pods/ef2b49b4-2807-4a26-8876-3cc189692b73/volume-subpaths: remove /var/lib/kubelet/pods/ef2b49b4-2807-4a26-8876-3cc189692b73/volume-subpaths: no such file or directory Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.371086 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef2b49b4-2807-4a26-8876-3cc189692b73" (UID: "ef2b49b4-2807-4a26-8876-3cc189692b73"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.372094 4625 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.372119 4625 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.372129 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.372141 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.447337 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw" event={"ID":"ef2b49b4-2807-4a26-8876-3cc189692b73","Type":"ContainerDied","Data":"5d037a6279c32de28c5f68b251590ea6ab330f446284b0c2746846cd3ded9cbe"} Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.447416 4625 scope.go:117] "RemoveContainer" containerID="84273fd170c0695e5976de2c52c5c3eb60351d7db691337c9e63218b130c0ec8" Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.447911 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw" Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.454710 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" event={"ID":"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d","Type":"ContainerStarted","Data":"cfe2c495fcd9bf475f7c8b03d1883929915775b91235f8d344c7932622aa6c25"} Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.465424 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5855fb4fd-8xvmf" event={"ID":"dc3dac2b-e3ca-4fde-b347-598e80af89ce","Type":"ContainerStarted","Data":"8f0ec3d54d8720abb095998d009656fd4bad4efca398e13f9539ff2e8d7df431"} Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.465488 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5855fb4fd-8xvmf" Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.465518 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5855fb4fd-8xvmf" Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.521265 4625 scope.go:117] "RemoveContainer" containerID="2163b98c42fbd777462561f6bd84b474f4f870c1d4822486c4572c9ab85873e8" Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.630206 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5855fb4fd-8xvmf" podStartSLOduration=5.6301753869999995 podStartE2EDuration="5.630175387s" podCreationTimestamp="2025-12-02 14:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:05:43.501383598 +0000 UTC m=+1299.463560673" watchObservedRunningTime="2025-12-02 14:05:43.630175387 +0000 UTC m=+1299.592352462" Dec 02 14:05:43 crc 
Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.727334 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68c4cddcdc-kxpt7"]
Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.744354 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-n2q97"]
Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.943177 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b967fc68b-flqxx"]
Dec 02 14:05:43 crc kubenswrapper[4625]: W1202 14:05:43.943789 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6305c486_3db4_4bea_8d41_555d94ea0e5d.slice/crio-6898ac25fad160a6b3b8744112208525266c0f9694d45603a6cbaf944a4996e1 WatchSource:0}: Error finding container 6898ac25fad160a6b3b8744112208525266c0f9694d45603a6cbaf944a4996e1: Status 404 returned error can't find the container with id 6898ac25fad160a6b3b8744112208525266c0f9694d45603a6cbaf944a4996e1
Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.943914 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-nb\") pod \"ef2b49b4-2807-4a26-8876-3cc189692b73\" (UID: \"ef2b49b4-2807-4a26-8876-3cc189692b73\") "
Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.944494 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef2b49b4-2807-4a26-8876-3cc189692b73" (UID: "ef2b49b4-2807-4a26-8876-3cc189692b73"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 14:05:43 crc kubenswrapper[4625]: I1202 14:05:43.946826 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef2b49b4-2807-4a26-8876-3cc189692b73-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.149834 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"]
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.166143 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qr7hw"]
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.436721 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w2qlr"
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.459880 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-config-data\") pod \"57d79055-dea2-4cd4-a642-b63d2deaa339\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") "
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.460453 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-credential-keys\") pod \"57d79055-dea2-4cd4-a642-b63d2deaa339\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") "
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.460634 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8lfc\" (UniqueName: \"kubernetes.io/projected/57d79055-dea2-4cd4-a642-b63d2deaa339-kube-api-access-w8lfc\") pod \"57d79055-dea2-4cd4-a642-b63d2deaa339\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") "
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.460777 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-scripts\") pod \"57d79055-dea2-4cd4-a642-b63d2deaa339\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") "
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.460966 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-combined-ca-bundle\") pod \"57d79055-dea2-4cd4-a642-b63d2deaa339\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") "
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.461940 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-fernet-keys\") pod \"57d79055-dea2-4cd4-a642-b63d2deaa339\" (UID: \"57d79055-dea2-4cd4-a642-b63d2deaa339\") "
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.555348 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-n2q97" event={"ID":"88761c55-9427-4195-a3c9-c54a5b8554c8","Type":"ContainerStarted","Data":"d325df2cccc66f196773df8abd480c9f0b97bb9a210a13e50d2c735ad3e3fec3"}
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.595133 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b967fc68b-flqxx" event={"ID":"6305c486-3db4-4bea-8d41-555d94ea0e5d","Type":"ContainerStarted","Data":"6898ac25fad160a6b3b8744112208525266c0f9694d45603a6cbaf944a4996e1"}
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.606279 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57d79055-dea2-4cd4-a642-b63d2deaa339" (UID: "57d79055-dea2-4cd4-a642-b63d2deaa339"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.610124 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "57d79055-dea2-4cd4-a642-b63d2deaa339" (UID: "57d79055-dea2-4cd4-a642-b63d2deaa339"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.612649 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.612690 4625 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.613752 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "57d79055-dea2-4cd4-a642-b63d2deaa339" (UID: "57d79055-dea2-4cd4-a642-b63d2deaa339"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.666211 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-scripts" (OuterVolumeSpecName: "scripts") pod "57d79055-dea2-4cd4-a642-b63d2deaa339" (UID: "57d79055-dea2-4cd4-a642-b63d2deaa339"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.666986 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57d79055-dea2-4cd4-a642-b63d2deaa339-kube-api-access-w8lfc" (OuterVolumeSpecName: "kube-api-access-w8lfc") pod "57d79055-dea2-4cd4-a642-b63d2deaa339" (UID: "57d79055-dea2-4cd4-a642-b63d2deaa339"). InnerVolumeSpecName "kube-api-access-w8lfc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.668586 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w2qlr" event={"ID":"57d79055-dea2-4cd4-a642-b63d2deaa339","Type":"ContainerDied","Data":"363819fb749abf0a8bf8e37d2eb3c8aa794f64054522f7f72a2b4737c7c67bd4"}
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.668630 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="363819fb749abf0a8bf8e37d2eb3c8aa794f64054522f7f72a2b4737c7c67bd4"
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.668704 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w2qlr"
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.677098 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68c4cddcdc-kxpt7" event={"ID":"183dcad1-443e-47e0-bc13-d98d7c316069","Type":"ContainerStarted","Data":"264c235637ff4016e38fb5d18486fdd318d786ae1fad7f184dd0b09b2955cf9c"}
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.715006 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8lfc\" (UniqueName: \"kubernetes.io/projected/57d79055-dea2-4cd4-a642-b63d2deaa339-kube-api-access-w8lfc\") on node \"crc\" DevicePath \"\""
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.715050 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-scripts\") on node \"crc\" DevicePath \"\""
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.715059 4625 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.755116 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-config-data" (OuterVolumeSpecName: "config-data") pod "57d79055-dea2-4cd4-a642-b63d2deaa339" (UID: "57d79055-dea2-4cd4-a642-b63d2deaa339"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.818102 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57d79055-dea2-4cd4-a642-b63d2deaa339-config-data\") on node \"crc\" DevicePath \"\""
Dec 02 14:05:44 crc kubenswrapper[4625]: I1202 14:05:44.897397 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef2b49b4-2807-4a26-8876-3cc189692b73" path="/var/lib/kubelet/pods/ef2b49b4-2807-4a26-8876-3cc189692b73/volumes"
Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.599805 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.600450 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.654968 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6f7c78dbd6-lsbdb"]
Dec 02 14:05:45 crc kubenswrapper[4625]: E1202 14:05:45.655645 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef2b49b4-2807-4a26-8876-3cc189692b73" containerName="dnsmasq-dns"
Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.655665 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef2b49b4-2807-4a26-8876-3cc189692b73" containerName="dnsmasq-dns"
Dec 02 14:05:45 crc kubenswrapper[4625]: E1202 14:05:45.655689 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57d79055-dea2-4cd4-a642-b63d2deaa339" containerName="keystone-bootstrap"
Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.655698 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="57d79055-dea2-4cd4-a642-b63d2deaa339" containerName="keystone-bootstrap"
Dec 02 14:05:45 crc kubenswrapper[4625]: E1202 14:05:45.655741 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef2b49b4-2807-4a26-8876-3cc189692b73" containerName="init"
removing container" podUID="ef2b49b4-2807-4a26-8876-3cc189692b73" containerName="init" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.655750 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef2b49b4-2807-4a26-8876-3cc189692b73" containerName="init" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.656023 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef2b49b4-2807-4a26-8876-3cc189692b73" containerName="dnsmasq-dns" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.656047 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="57d79055-dea2-4cd4-a642-b63d2deaa339" containerName="keystone-bootstrap" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.656945 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.676704 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.677129 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.677257 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.677413 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-v2szz" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.677540 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.677684 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.702616 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f7c78dbd6-lsbdb"] Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.735130 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.737858 4625 generic.go:334] "Generic (PLEG): container finished" podID="88761c55-9427-4195-a3c9-c54a5b8554c8" containerID="4e6d2d5de34710e37838f70a6d577ba71ca58e87b01af3cb31617e688012e67a" exitCode=0 Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.737959 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-n2q97" event={"ID":"88761c55-9427-4195-a3c9-c54a5b8554c8","Type":"ContainerDied","Data":"4e6d2d5de34710e37838f70a6d577ba71ca58e87b01af3cb31617e688012e67a"} Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.754373 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-credential-keys\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.754442 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4l9h\" (UniqueName: \"kubernetes.io/projected/f5f7b2e0-20a9-42b2-b323-4c813153f09f-kube-api-access-q4l9h\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " 
pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.754566 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-combined-ca-bundle\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.754589 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-fernet-keys\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.754613 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-config-data\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.754636 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-scripts\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.754658 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-public-tls-certs\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.754724 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-internal-tls-certs\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.769499 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b967fc68b-flqxx" event={"ID":"6305c486-3db4-4bea-8d41-555d94ea0e5d","Type":"ContainerStarted","Data":"7155cfa7de55070cff7a7874be6fc821cc88ed501a0c167b3af01a33182482be"} Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.769601 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.796783 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.859764 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-combined-ca-bundle\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 
14:05:45.859837 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-fernet-keys\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.859894 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-config-data\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.859931 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-scripts\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.859975 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-public-tls-certs\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.860183 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-internal-tls-certs\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.862270 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-credential-keys\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.862414 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4l9h\" (UniqueName: \"kubernetes.io/projected/f5f7b2e0-20a9-42b2-b323-4c813153f09f-kube-api-access-q4l9h\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.876072 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-internal-tls-certs\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.876553 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-credential-keys\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.878181 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-fernet-keys\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.886532 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-config-data\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.891226 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-scripts\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.893968 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-public-tls-certs\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.904039 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4l9h\" (UniqueName: \"kubernetes.io/projected/f5f7b2e0-20a9-42b2-b323-4c813153f09f-kube-api-access-q4l9h\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:45 crc kubenswrapper[4625]: I1202 14:05:45.909725 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f7b2e0-20a9-42b2-b323-4c813153f09f-combined-ca-bundle\") pod \"keystone-6f7c78dbd6-lsbdb\" (UID: \"f5f7b2e0-20a9-42b2-b323-4c813153f09f\") " pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.051055 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.104935 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c94877878-jvhxv" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.295799 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.296358 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.359437 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7dc4db5bfb-zbs4l" podUID="92339196-3d33-4b76-9ba2-81e1a8373e84" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.451199 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.458470 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.665494 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7f988fb4d-rk87d"] Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.670459 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.682819 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.683303 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.691415 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49phc\" (UniqueName: \"kubernetes.io/projected/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-kube-api-access-49phc\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.691526 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-public-tls-certs\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.691583 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-config-data\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.691610 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-combined-ca-bundle\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.691632 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-internal-tls-certs\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.692092 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-logs\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.692329 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-config-data-custom\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.701586 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f988fb4d-rk87d"] Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.794233 4625 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-logs\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.794503 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-config-data-custom\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.794572 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49phc\" (UniqueName: \"kubernetes.io/projected/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-kube-api-access-49phc\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.794613 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-public-tls-certs\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.794654 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-config-data\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.794700 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-combined-ca-bundle\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.794722 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-internal-tls-certs\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.795905 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-logs\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.797262 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.797327 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.797655 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 14:05:46 crc 
kubenswrapper[4625]: I1202 14:05:46.812331 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-internal-tls-certs\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.814647 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-combined-ca-bundle\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.815335 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-config-data-custom\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.834632 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-config-data\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.835469 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-public-tls-certs\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:46 crc kubenswrapper[4625]: I1202 14:05:46.896001 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49phc\" (UniqueName: \"kubernetes.io/projected/fc88c0ad-8893-4168-bf0c-e9ed829f1b62-kube-api-access-49phc\") pod \"barbican-api-7f988fb4d-rk87d\" (UID: \"fc88c0ad-8893-4168-bf0c-e9ed829f1b62\") " pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:47 crc kubenswrapper[4625]: I1202 14:05:47.001074 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:05:47 crc kubenswrapper[4625]: I1202 14:05:47.634673 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-qr7hw" podUID="ef2b49b4-2807-4a26-8876-3cc189692b73" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout" Dec 02 14:05:47 crc kubenswrapper[4625]: I1202 14:05:47.819215 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 14:05:48 crc kubenswrapper[4625]: I1202 14:05:48.841430 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 14:05:48 crc kubenswrapper[4625]: I1202 14:05:48.841915 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 14:05:48 crc kubenswrapper[4625]: I1202 14:05:48.842002 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 14:05:48 crc kubenswrapper[4625]: I1202 14:05:48.842011 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 14:05:54 crc kubenswrapper[4625]: I1202 14:05:54.521599 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:54 crc kubenswrapper[4625]: I1202 14:05:54.522717 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 14:05:54 crc kubenswrapper[4625]: I1202 14:05:54.729885 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 14:05:54 crc kubenswrapper[4625]: I1202 14:05:54.907179 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 14:05:54 crc kubenswrapper[4625]: I1202 14:05:54.907377 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 14:05:54 crc kubenswrapper[4625]: I1202 14:05:54.922081 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 14:05:55 crc kubenswrapper[4625]: I1202 14:05:55.257596 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-55fdff466d-bbrr5" podUID="e595879c-342f-410e-9ad5-b60498125c2e" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 02 14:05:55 crc kubenswrapper[4625]: E1202 14:05:55.258691 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified" Dec 02 14:05:55 crc kubenswrapper[4625]: E1202 14:05:55.258884 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-keystone-listener-log,Image:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,Command:[/usr/bin/dumb-init],Args:[--single-child -- /usr/bin/tail -n+1 -F 
/var/log/barbican/barbican-keystone-listener.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n667h7dh659h5d5h7fh88h59bh658hf4h8fh5f6h5b7h675h544hdfh65ch5b6h5ddh5bbh55hb5h545h5b5h9ch659hf9h545h585h694h54bh55dh96q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/barbican,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvk2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-keystone-listener-5d57b47bd4-2hxfs_openstack(ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:05:55 crc kubenswrapper[4625]: I1202 14:05:55.259106 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-55fdff466d-bbrr5" podUID="e595879c-342f-410e-9ad5-b60498125c2e" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 02 14:05:55 crc kubenswrapper[4625]: I1202 14:05:55.259731 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-55fdff466d-bbrr5" podUID="e595879c-342f-410e-9ad5-b60498125c2e" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 02 14:05:55 crc kubenswrapper[4625]: E1202 14:05:55.304518 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"barbican-keystone-listener-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"barbican-keystone-listener\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified\\\"\"]" pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" podUID="ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d" Dec 02 14:05:55 crc kubenswrapper[4625]: E1202 14:05:55.958896 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"barbican-keystone-listener-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified\\\"\", failed to \"StartContainer\" for \"barbican-keystone-listener\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified\\\"\"]" pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" podUID="ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d" Dec 02 14:05:56 crc kubenswrapper[4625]: I1202 
14:05:56.096567 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c94877878-jvhxv" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 02 14:05:56 crc kubenswrapper[4625]: I1202 14:05:56.358243 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7dc4db5bfb-zbs4l" podUID="92339196-3d33-4b76-9ba2-81e1a8373e84" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Dec 02 14:06:02 crc kubenswrapper[4625]: E1202 14:06:02.066190 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Dec 02 14:06:02 crc kubenswrapper[4625]: E1202 14:06:02.067214 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgpgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(abbd3215-4ced-473b-84a7-1f859e2782b2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:06:02 crc kubenswrapper[4625]: I1202 14:06:02.884760 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f7c78dbd6-lsbdb"] Dec 02 14:06:02 crc kubenswrapper[4625]: I1202 14:06:02.885698 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f988fb4d-rk87d"] Dec 02 14:06:03 crc kubenswrapper[4625]: I1202 14:06:03.123678 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f988fb4d-rk87d" event={"ID":"fc88c0ad-8893-4168-bf0c-e9ed829f1b62","Type":"ContainerStarted","Data":"b431947269516f7194e8e9ce3e2816100ba75b2af44fd4fc2a039cd506e010ac"} Dec 02 14:06:03 crc kubenswrapper[4625]: I1202 14:06:03.135670 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-n2q97" event={"ID":"88761c55-9427-4195-a3c9-c54a5b8554c8","Type":"ContainerStarted","Data":"eb3a3a5ef4dfbd32d4fbe319f8d66ee92abe31f4c57c1969002e6206b2172d39"} Dec 02 14:06:03 crc kubenswrapper[4625]: I1202 14:06:03.136915 
4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-n2q97" Dec 02 14:06:03 crc kubenswrapper[4625]: I1202 14:06:03.145652 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f7c78dbd6-lsbdb" event={"ID":"f5f7b2e0-20a9-42b2-b323-4c813153f09f","Type":"ContainerStarted","Data":"05df8d3189f01f3cf113af8e5cb6bfedd99886be4ad2717cbd1d01fb33f6054d"} Dec 02 14:06:03 crc kubenswrapper[4625]: I1202 14:06:03.159715 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b967fc68b-flqxx" event={"ID":"6305c486-3db4-4bea-8d41-555d94ea0e5d","Type":"ContainerStarted","Data":"1a3cf40229a149b6f96d45542d9e2d953ac5c335e7d59fd18c322e334129e014"} Dec 02 14:06:03 crc kubenswrapper[4625]: I1202 14:06:03.160426 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:06:03 crc kubenswrapper[4625]: I1202 14:06:03.160910 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:06:03 crc kubenswrapper[4625]: I1202 14:06:03.174907 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-n2q97" podStartSLOduration=22.174876525 podStartE2EDuration="22.174876525s" podCreationTimestamp="2025-12-02 14:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:06:03.169276921 +0000 UTC m=+1319.131453996" watchObservedRunningTime="2025-12-02 14:06:03.174876525 +0000 UTC m=+1319.137053590" Dec 02 14:06:03 crc kubenswrapper[4625]: I1202 14:06:03.182654 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68c4cddcdc-kxpt7" event={"ID":"183dcad1-443e-47e0-bc13-d98d7c316069","Type":"ContainerStarted","Data":"6eaa34cd74f84391226d6815ee7df2715260818bb26f201a8e5398f5e531c599"} Dec 02 14:06:03 crc kubenswrapper[4625]: I1202 14:06:03.257388 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7b967fc68b-flqxx" podStartSLOduration=22.257348685 podStartE2EDuration="22.257348685s" podCreationTimestamp="2025-12-02 14:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:06:03.215850407 +0000 UTC m=+1319.178027502" watchObservedRunningTime="2025-12-02 14:06:03.257348685 +0000 UTC m=+1319.219525760" Dec 02 14:06:03 crc kubenswrapper[4625]: I1202 14:06:03.277791 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8c746598f-ss7rg" Dec 02 14:06:03 crc kubenswrapper[4625]: I1202 14:06:03.366941 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-55fdff466d-bbrr5"] Dec 02 14:06:03 crc kubenswrapper[4625]: I1202 14:06:03.367262 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-55fdff466d-bbrr5" podUID="e595879c-342f-410e-9ad5-b60498125c2e" containerName="neutron-api" containerID="cri-o://d60ba7fb3fbbc9818305e854a7d5224071ef9ea372d4fe05ee68518c27c673db" gracePeriod=30 Dec 02 14:06:03 crc kubenswrapper[4625]: I1202 14:06:03.371375 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-55fdff466d-bbrr5" podUID="e595879c-342f-410e-9ad5-b60498125c2e" containerName="neutron-httpd" 
containerID="cri-o://1f6e6931df756d52e2d5d628fa5ed4121b735dd954716b51a8e38e2024ecfbe1" gracePeriod=30 Dec 02 14:06:03 crc kubenswrapper[4625]: I1202 14:06:03.526677 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-55fdff466d-bbrr5" podUID="e595879c-342f-410e-9ad5-b60498125c2e" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.146:9696/\": read tcp 10.217.0.2:33552->10.217.0.146:9696: read: connection reset by peer" Dec 02 14:06:04 crc kubenswrapper[4625]: I1202 14:06:04.206448 4625 generic.go:334] "Generic (PLEG): container finished" podID="e595879c-342f-410e-9ad5-b60498125c2e" containerID="1f6e6931df756d52e2d5d628fa5ed4121b735dd954716b51a8e38e2024ecfbe1" exitCode=0 Dec 02 14:06:04 crc kubenswrapper[4625]: I1202 14:06:04.208473 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55fdff466d-bbrr5" event={"ID":"e595879c-342f-410e-9ad5-b60498125c2e","Type":"ContainerDied","Data":"1f6e6931df756d52e2d5d628fa5ed4121b735dd954716b51a8e38e2024ecfbe1"} Dec 02 14:06:04 crc kubenswrapper[4625]: I1202 14:06:04.242920 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68c4cddcdc-kxpt7" event={"ID":"183dcad1-443e-47e0-bc13-d98d7c316069","Type":"ContainerStarted","Data":"18cd6bc10e7d6eac81114a58b37f61ffbd15c755bd727e4b32a9a93171493d5f"} Dec 02 14:06:04 crc kubenswrapper[4625]: I1202 14:06:04.282941 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f988fb4d-rk87d" event={"ID":"fc88c0ad-8893-4168-bf0c-e9ed829f1b62","Type":"ContainerStarted","Data":"fa2a72951faf49a2fafd04ad98fe8e94c37382ab5b08a718aed1d6668d4093e5"} Dec 02 14:06:04 crc kubenswrapper[4625]: I1202 14:06:04.294751 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f7c78dbd6-lsbdb" event={"ID":"f5f7b2e0-20a9-42b2-b323-4c813153f09f","Type":"ContainerStarted","Data":"1b75843beff4960b3fbdb7c8189438cc0866230599978cab8059e5c9c2f40e56"} Dec 02 14:06:04 crc kubenswrapper[4625]: I1202 14:06:04.295543 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:06:04 crc kubenswrapper[4625]: I1202 14:06:04.312474 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-68c4cddcdc-kxpt7" podStartSLOduration=4.9802963590000005 podStartE2EDuration="23.312448918s" podCreationTimestamp="2025-12-02 14:05:41 +0000 UTC" firstStartedPulling="2025-12-02 14:05:43.816528694 +0000 UTC m=+1299.778705769" lastFinishedPulling="2025-12-02 14:06:02.148681253 +0000 UTC m=+1318.110858328" observedRunningTime="2025-12-02 14:06:04.304370396 +0000 UTC m=+1320.266547481" watchObservedRunningTime="2025-12-02 14:06:04.312448918 +0000 UTC m=+1320.274625993" Dec 02 14:06:04 crc kubenswrapper[4625]: I1202 14:06:04.385775 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6f7c78dbd6-lsbdb" podStartSLOduration=19.385744106 podStartE2EDuration="19.385744106s" podCreationTimestamp="2025-12-02 14:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:06:04.364363891 +0000 UTC m=+1320.326540976" watchObservedRunningTime="2025-12-02 14:06:04.385744106 +0000 UTC m=+1320.347921171" Dec 02 14:06:05 crc kubenswrapper[4625]: I1202 14:06:05.331808 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f988fb4d-rk87d" 
event={"ID":"fc88c0ad-8893-4168-bf0c-e9ed829f1b62","Type":"ContainerStarted","Data":"1f18fc7e471ec6a28f39338606f5faa95e72643216255febd25bfc4a454129e0"} Dec 02 14:06:05 crc kubenswrapper[4625]: I1202 14:06:05.332353 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:06:05 crc kubenswrapper[4625]: I1202 14:06:05.332386 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:06:05 crc kubenswrapper[4625]: I1202 14:06:05.369093 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7f988fb4d-rk87d" podStartSLOduration=19.369058133 podStartE2EDuration="19.369058133s" podCreationTimestamp="2025-12-02 14:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:06:05.365767393 +0000 UTC m=+1321.327944468" watchObservedRunningTime="2025-12-02 14:06:05.369058133 +0000 UTC m=+1321.331235208" Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.096226 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c94877878-jvhxv" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.096763 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.102561 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"f7d7aff050b1cd68f760459d9ee8066bf44a2756b77213a691265525e661240d"} pod="openstack/horizon-5c94877878-jvhxv" containerMessage="Container horizon failed startup probe, will be restarted" Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.102649 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c94877878-jvhxv" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" containerID="cri-o://f7d7aff050b1cd68f760459d9ee8066bf44a2756b77213a691265525e661240d" gracePeriod=30 Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.357547 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7dc4db5bfb-zbs4l" podUID="92339196-3d33-4b76-9ba2-81e1a8373e84" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.359731 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.360926 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"ecf1871be89bb7259b3396f1b0d15bf2940dc1ca653cceb4173acdb58bbada5d"} pod="openstack/horizon-7dc4db5bfb-zbs4l" containerMessage="Container horizon failed startup probe, will be restarted" Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.361558 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7dc4db5bfb-zbs4l" podUID="92339196-3d33-4b76-9ba2-81e1a8373e84" 
containerName="horizon" containerID="cri-o://ecf1871be89bb7259b3396f1b0d15bf2940dc1ca653cceb4173acdb58bbada5d" gracePeriod=30 Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.398266 4625 generic.go:334] "Generic (PLEG): container finished" podID="e595879c-342f-410e-9ad5-b60498125c2e" containerID="d60ba7fb3fbbc9818305e854a7d5224071ef9ea372d4fe05ee68518c27c673db" exitCode=0 Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.399488 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55fdff466d-bbrr5" event={"ID":"e595879c-342f-410e-9ad5-b60498125c2e","Type":"ContainerDied","Data":"d60ba7fb3fbbc9818305e854a7d5224071ef9ea372d4fe05ee68518c27c673db"} Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.563788 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.664332 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-combined-ca-bundle\") pod \"e595879c-342f-410e-9ad5-b60498125c2e\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.664410 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-config\") pod \"e595879c-342f-410e-9ad5-b60498125c2e\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.664530 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-httpd-config\") pod \"e595879c-342f-410e-9ad5-b60498125c2e\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.664597 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-ovndb-tls-certs\") pod \"e595879c-342f-410e-9ad5-b60498125c2e\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.664687 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlsr8\" (UniqueName: \"kubernetes.io/projected/e595879c-342f-410e-9ad5-b60498125c2e-kube-api-access-jlsr8\") pod \"e595879c-342f-410e-9ad5-b60498125c2e\" (UID: \"e595879c-342f-410e-9ad5-b60498125c2e\") " Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.687454 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e595879c-342f-410e-9ad5-b60498125c2e" (UID: "e595879c-342f-410e-9ad5-b60498125c2e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.698855 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e595879c-342f-410e-9ad5-b60498125c2e-kube-api-access-jlsr8" (OuterVolumeSpecName: "kube-api-access-jlsr8") pod "e595879c-342f-410e-9ad5-b60498125c2e" (UID: "e595879c-342f-410e-9ad5-b60498125c2e"). InnerVolumeSpecName "kube-api-access-jlsr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.769473 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlsr8\" (UniqueName: \"kubernetes.io/projected/e595879c-342f-410e-9ad5-b60498125c2e-kube-api-access-jlsr8\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.769956 4625 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.853848 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-config" (OuterVolumeSpecName: "config") pod "e595879c-342f-410e-9ad5-b60498125c2e" (UID: "e595879c-342f-410e-9ad5-b60498125c2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.871772 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.897747 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e595879c-342f-410e-9ad5-b60498125c2e" (UID: "e595879c-342f-410e-9ad5-b60498125c2e"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.973888 4625 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:06 crc kubenswrapper[4625]: I1202 14:06:06.998410 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e595879c-342f-410e-9ad5-b60498125c2e" (UID: "e595879c-342f-410e-9ad5-b60498125c2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:07 crc kubenswrapper[4625]: I1202 14:06:07.078143 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e595879c-342f-410e-9ad5-b60498125c2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:07 crc kubenswrapper[4625]: I1202 14:06:07.349915 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-n2q97" Dec 02 14:06:07 crc kubenswrapper[4625]: I1202 14:06:07.509487 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vz4hz"] Dec 02 14:06:07 crc kubenswrapper[4625]: I1202 14:06:07.511188 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55fdff466d-bbrr5" event={"ID":"e595879c-342f-410e-9ad5-b60498125c2e","Type":"ContainerDied","Data":"b71d8a8a0b8fa005c21131aaa239982655642516e2031c5148ce5dd5bc191503"} Dec 02 14:06:07 crc kubenswrapper[4625]: I1202 14:06:07.510774 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-55fdff466d-bbrr5" Dec 02 14:06:07 crc kubenswrapper[4625]: I1202 14:06:07.511357 4625 scope.go:117] "RemoveContainer" containerID="1f6e6931df756d52e2d5d628fa5ed4121b735dd954716b51a8e38e2024ecfbe1" Dec 02 14:06:07 crc kubenswrapper[4625]: I1202 14:06:07.513647 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" podUID="2becaaf4-3b57-4612-b20e-ef1b93b563d1" containerName="dnsmasq-dns" containerID="cri-o://9b995a59b8303f5f74b62222e34d3ab993e62497bd73b859b1444149cc24eb8d" gracePeriod=10 Dec 02 14:06:07 crc kubenswrapper[4625]: I1202 14:06:07.537614 4625 generic.go:334] "Generic (PLEG): container finished" podID="c29ce362-3978-4713-833d-49aab29a394c" containerID="da74b31f811c5383a2f75c8c86c15ae7cd8f586c752c735f8fdb5f8d2612694e" exitCode=0 Dec 02 14:06:07 crc kubenswrapper[4625]: I1202 14:06:07.537684 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-blvbp" event={"ID":"c29ce362-3978-4713-833d-49aab29a394c","Type":"ContainerDied","Data":"da74b31f811c5383a2f75c8c86c15ae7cd8f586c752c735f8fdb5f8d2612694e"} Dec 02 14:06:07 crc kubenswrapper[4625]: I1202 14:06:07.636970 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-55fdff466d-bbrr5"] Dec 02 14:06:07 crc kubenswrapper[4625]: I1202 14:06:07.645902 4625 scope.go:117] "RemoveContainer" containerID="d60ba7fb3fbbc9818305e854a7d5224071ef9ea372d4fe05ee68518c27c673db" Dec 02 14:06:07 crc kubenswrapper[4625]: I1202 14:06:07.708198 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-55fdff466d-bbrr5"] Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.204535 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b967fc68b-flqxx" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.453179 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.540425 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-ovsdbserver-nb\") pod \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.540509 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-dns-svc\") pod \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.540659 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-ovsdbserver-sb\") pod \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.540701 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-config\") pod \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.540722 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt72m\" (UniqueName: \"kubernetes.io/projected/2becaaf4-3b57-4612-b20e-ef1b93b563d1-kube-api-access-qt72m\") pod \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.540775 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-dns-swift-storage-0\") pod \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\" (UID: \"2becaaf4-3b57-4612-b20e-ef1b93b563d1\") " Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.602083 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" event={"ID":"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d","Type":"ContainerStarted","Data":"8aa8a9166ce8034a356ebdff034bcbcbfb88a5858030052fd8debdf4e485f932"} Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.609855 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2becaaf4-3b57-4612-b20e-ef1b93b563d1-kube-api-access-qt72m" (OuterVolumeSpecName: "kube-api-access-qt72m") pod "2becaaf4-3b57-4612-b20e-ef1b93b563d1" (UID: "2becaaf4-3b57-4612-b20e-ef1b93b563d1"). InnerVolumeSpecName "kube-api-access-qt72m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.617600 4625 generic.go:334] "Generic (PLEG): container finished" podID="2becaaf4-3b57-4612-b20e-ef1b93b563d1" containerID="9b995a59b8303f5f74b62222e34d3ab993e62497bd73b859b1444149cc24eb8d" exitCode=0 Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.619719 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.621422 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" event={"ID":"2becaaf4-3b57-4612-b20e-ef1b93b563d1","Type":"ContainerDied","Data":"9b995a59b8303f5f74b62222e34d3ab993e62497bd73b859b1444149cc24eb8d"} Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.621587 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vz4hz" event={"ID":"2becaaf4-3b57-4612-b20e-ef1b93b563d1","Type":"ContainerDied","Data":"5ec9126f27d407eb7a156e009e0c1a3c6c8d496830ffae73d6bb07cee79d3348"} Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.621707 4625 scope.go:117] "RemoveContainer" containerID="9b995a59b8303f5f74b62222e34d3ab993e62497bd73b859b1444149cc24eb8d" Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.646936 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt72m\" (UniqueName: \"kubernetes.io/projected/2becaaf4-3b57-4612-b20e-ef1b93b563d1-kube-api-access-qt72m\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.691631 4625 scope.go:117] "RemoveContainer" containerID="d887f0d1de24b98c0613b9c979fa0844575a0637aa5ef8f10df3baf3070d4ea8" Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.724515 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7b967fc68b-flqxx" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:08 crc kubenswrapper[4625]: I1202 14:06:08.892886 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e595879c-342f-410e-9ad5-b60498125c2e" path="/var/lib/kubelet/pods/e595879c-342f-410e-9ad5-b60498125c2e/volumes" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.252189 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-config" (OuterVolumeSpecName: "config") pod "2becaaf4-3b57-4612-b20e-ef1b93b563d1" (UID: "2becaaf4-3b57-4612-b20e-ef1b93b563d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.299939 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.308287 4625 scope.go:117] "RemoveContainer" containerID="9b995a59b8303f5f74b62222e34d3ab993e62497bd73b859b1444149cc24eb8d" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.329235 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2becaaf4-3b57-4612-b20e-ef1b93b563d1" (UID: "2becaaf4-3b57-4612-b20e-ef1b93b563d1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:09 crc kubenswrapper[4625]: E1202 14:06:09.329954 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b995a59b8303f5f74b62222e34d3ab993e62497bd73b859b1444149cc24eb8d\": container with ID starting with 9b995a59b8303f5f74b62222e34d3ab993e62497bd73b859b1444149cc24eb8d not found: ID does not exist" containerID="9b995a59b8303f5f74b62222e34d3ab993e62497bd73b859b1444149cc24eb8d" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.330011 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b995a59b8303f5f74b62222e34d3ab993e62497bd73b859b1444149cc24eb8d"} err="failed to get container status \"9b995a59b8303f5f74b62222e34d3ab993e62497bd73b859b1444149cc24eb8d\": rpc error: code = NotFound desc = could not find container \"9b995a59b8303f5f74b62222e34d3ab993e62497bd73b859b1444149cc24eb8d\": container with ID starting with 9b995a59b8303f5f74b62222e34d3ab993e62497bd73b859b1444149cc24eb8d not found: ID does not exist" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.330047 4625 scope.go:117] "RemoveContainer" containerID="d887f0d1de24b98c0613b9c979fa0844575a0637aa5ef8f10df3baf3070d4ea8" Dec 02 14:06:09 crc kubenswrapper[4625]: E1202 14:06:09.342563 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d887f0d1de24b98c0613b9c979fa0844575a0637aa5ef8f10df3baf3070d4ea8\": container with ID starting with d887f0d1de24b98c0613b9c979fa0844575a0637aa5ef8f10df3baf3070d4ea8 not found: ID does not exist" containerID="d887f0d1de24b98c0613b9c979fa0844575a0637aa5ef8f10df3baf3070d4ea8" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.343653 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d887f0d1de24b98c0613b9c979fa0844575a0637aa5ef8f10df3baf3070d4ea8"} err="failed to get container status \"d887f0d1de24b98c0613b9c979fa0844575a0637aa5ef8f10df3baf3070d4ea8\": rpc error: code = NotFound desc = could not find container \"d887f0d1de24b98c0613b9c979fa0844575a0637aa5ef8f10df3baf3070d4ea8\": container with ID starting with d887f0d1de24b98c0613b9c979fa0844575a0637aa5ef8f10df3baf3070d4ea8 not found: ID does not exist" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.372393 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2becaaf4-3b57-4612-b20e-ef1b93b563d1" (UID: "2becaaf4-3b57-4612-b20e-ef1b93b563d1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.392352 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2becaaf4-3b57-4612-b20e-ef1b93b563d1" (UID: "2becaaf4-3b57-4612-b20e-ef1b93b563d1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.403589 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.403628 4625 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.403639 4625 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.411011 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-blvbp" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.425403 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2becaaf4-3b57-4612-b20e-ef1b93b563d1" (UID: "2becaaf4-3b57-4612-b20e-ef1b93b563d1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.506207 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-config-data\") pod \"c29ce362-3978-4713-833d-49aab29a394c\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.506335 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g65z6\" (UniqueName: \"kubernetes.io/projected/c29ce362-3978-4713-833d-49aab29a394c-kube-api-access-g65z6\") pod \"c29ce362-3978-4713-833d-49aab29a394c\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.506523 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-combined-ca-bundle\") pod \"c29ce362-3978-4713-833d-49aab29a394c\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.506607 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-db-sync-config-data\") pod \"c29ce362-3978-4713-833d-49aab29a394c\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.506632 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-scripts\") pod \"c29ce362-3978-4713-833d-49aab29a394c\" (UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.506763 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29ce362-3978-4713-833d-49aab29a394c-etc-machine-id\") pod \"c29ce362-3978-4713-833d-49aab29a394c\" 
(UID: \"c29ce362-3978-4713-833d-49aab29a394c\") " Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.507351 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2becaaf4-3b57-4612-b20e-ef1b93b563d1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.507469 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29ce362-3978-4713-833d-49aab29a394c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c29ce362-3978-4713-833d-49aab29a394c" (UID: "c29ce362-3978-4713-833d-49aab29a394c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.523918 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c29ce362-3978-4713-833d-49aab29a394c" (UID: "c29ce362-3978-4713-833d-49aab29a394c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.524899 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29ce362-3978-4713-833d-49aab29a394c-kube-api-access-g65z6" (OuterVolumeSpecName: "kube-api-access-g65z6") pod "c29ce362-3978-4713-833d-49aab29a394c" (UID: "c29ce362-3978-4713-833d-49aab29a394c"). InnerVolumeSpecName "kube-api-access-g65z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.529479 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-scripts" (OuterVolumeSpecName: "scripts") pod "c29ce362-3978-4713-833d-49aab29a394c" (UID: "c29ce362-3978-4713-833d-49aab29a394c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.582168 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vz4hz"] Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.588303 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c29ce362-3978-4713-833d-49aab29a394c" (UID: "c29ce362-3978-4713-833d-49aab29a394c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.610133 4625 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29ce362-3978-4713-833d-49aab29a394c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.610182 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g65z6\" (UniqueName: \"kubernetes.io/projected/c29ce362-3978-4713-833d-49aab29a394c-kube-api-access-g65z6\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.610195 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.610204 4625 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.610214 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.611434 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vz4hz"] Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.664411 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-blvbp" event={"ID":"c29ce362-3978-4713-833d-49aab29a394c","Type":"ContainerDied","Data":"0d14af64aecf7fe87770ecfce2961b50335df6b3ae7d09489d352aacebb089e6"} Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.664465 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d14af64aecf7fe87770ecfce2961b50335df6b3ae7d09489d352aacebb089e6" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.664559 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-blvbp" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.668488 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" event={"ID":"ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d","Type":"ContainerStarted","Data":"17ac564d58ce3c99eadfab266e4fca549000aa44bd3b17777c91382f84f6bce1"} Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.669276 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-config-data" (OuterVolumeSpecName: "config-data") pod "c29ce362-3978-4713-833d-49aab29a394c" (UID: "c29ce362-3978-4713-833d-49aab29a394c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.704793 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5d57b47bd4-2hxfs" podStartSLOduration=4.008507948 podStartE2EDuration="28.704768047s" podCreationTimestamp="2025-12-02 14:05:41 +0000 UTC" firstStartedPulling="2025-12-02 14:05:43.14896978 +0000 UTC m=+1299.111146855" lastFinishedPulling="2025-12-02 14:06:07.845229879 +0000 UTC m=+1323.807406954" observedRunningTime="2025-12-02 14:06:09.699218145 +0000 UTC m=+1325.661395220" watchObservedRunningTime="2025-12-02 14:06:09.704768047 +0000 UTC m=+1325.666945122" Dec 02 14:06:09 crc kubenswrapper[4625]: I1202 14:06:09.712898 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29ce362-3978-4713-833d-49aab29a394c-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.073647 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:06:10 crc kubenswrapper[4625]: E1202 14:06:10.076000 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29ce362-3978-4713-833d-49aab29a394c" containerName="cinder-db-sync" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.076044 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29ce362-3978-4713-833d-49aab29a394c" containerName="cinder-db-sync" Dec 02 14:06:10 crc kubenswrapper[4625]: E1202 14:06:10.076077 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e595879c-342f-410e-9ad5-b60498125c2e" containerName="neutron-api" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.076088 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="e595879c-342f-410e-9ad5-b60498125c2e" containerName="neutron-api" Dec 02 14:06:10 crc kubenswrapper[4625]: E1202 14:06:10.076105 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2becaaf4-3b57-4612-b20e-ef1b93b563d1" containerName="init" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.076113 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="2becaaf4-3b57-4612-b20e-ef1b93b563d1" containerName="init" Dec 02 14:06:10 crc kubenswrapper[4625]: E1202 14:06:10.076150 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e595879c-342f-410e-9ad5-b60498125c2e" containerName="neutron-httpd" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.076163 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="e595879c-342f-410e-9ad5-b60498125c2e" containerName="neutron-httpd" Dec 02 14:06:10 crc kubenswrapper[4625]: E1202 14:06:10.076179 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2becaaf4-3b57-4612-b20e-ef1b93b563d1" containerName="dnsmasq-dns" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.076191 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="2becaaf4-3b57-4612-b20e-ef1b93b563d1" containerName="dnsmasq-dns" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.076559 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="e595879c-342f-410e-9ad5-b60498125c2e" containerName="neutron-api" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.076579 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="2becaaf4-3b57-4612-b20e-ef1b93b563d1" containerName="dnsmasq-dns" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.076588 4625 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="e595879c-342f-410e-9ad5-b60498125c2e" containerName="neutron-httpd" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.076601 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29ce362-3978-4713-833d-49aab29a394c" containerName="cinder-db-sync" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.079176 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.088134 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.088546 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.088960 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.089053 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-j66qt" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.130046 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.156104 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f730bcc-709b-4d75-9d55-579a852a5855-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.156189 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.156213 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrlsv\" (UniqueName: \"kubernetes.io/projected/3f730bcc-709b-4d75-9d55-579a852a5855-kube-api-access-rrlsv\") pod \"cinder-scheduler-0\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.156271 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-config-data\") pod \"cinder-scheduler-0\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.156411 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-scripts\") pod \"cinder-scheduler-0\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.156444 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.252783 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lcqjf"] Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.260695 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.263200 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-config-data\") pod \"cinder-scheduler-0\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.263246 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.263290 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-scripts\") pod \"cinder-scheduler-0\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.263330 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.263356 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.263383 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.263416 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f730bcc-709b-4d75-9d55-579a852a5855-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.263455 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4cz8\" (UniqueName: \"kubernetes.io/projected/cbaa9675-d1dc-4b23-962a-5607cbacad8d-kube-api-access-g4cz8\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" Dec 02 14:06:10 crc 
kubenswrapper[4625]: I1202 14:06:10.263473 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.263497 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.263513 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrlsv\" (UniqueName: \"kubernetes.io/projected/3f730bcc-709b-4d75-9d55-579a852a5855-kube-api-access-rrlsv\") pod \"cinder-scheduler-0\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.263528 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-config\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.264500 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f730bcc-709b-4d75-9d55-579a852a5855-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.275903 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.281463 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.286169 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-scripts\") pod \"cinder-scheduler-0\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.293296 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lcqjf"] Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.297761 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-config-data\") pod \"cinder-scheduler-0\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.306584 
Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.364466 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf"
Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.364586 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf"
Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.364610 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4cz8\" (UniqueName: \"kubernetes.io/projected/cbaa9675-d1dc-4b23-962a-5607cbacad8d-kube-api-access-g4cz8\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf"
Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.364641 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-config\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf"
Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.364710 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf"
Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.364785 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf"
Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.366594 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-config\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf"
Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.367003 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf"
Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.367411 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf"
Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.368428 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf"
Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.369619 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf"
Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.403493 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4cz8\" (UniqueName: \"kubernetes.io/projected/cbaa9675-d1dc-4b23-962a-5607cbacad8d-kube-api-access-g4cz8\") pod \"dnsmasq-dns-5c9776ccc5-lcqjf\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf"
Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.429106 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.470856 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf"
Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.679446 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.682299 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Need to start a new one" pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.687687 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.755896 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.778271 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-config-data-custom\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.778334 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btgnj\" (UniqueName: \"kubernetes.io/projected/2306d020-a207-453c-afb9-cd70e5b04e22-kube-api-access-btgnj\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.778418 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.778450 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-scripts\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.778463 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-config-data\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.778520 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2306d020-a207-453c-afb9-cd70e5b04e22-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.778547 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2306d020-a207-453c-afb9-cd70e5b04e22-logs\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.880406 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-config-data-custom\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.881285 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2becaaf4-3b57-4612-b20e-ef1b93b563d1" 
path="/var/lib/kubelet/pods/2becaaf4-3b57-4612-b20e-ef1b93b563d1/volumes" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.882039 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btgnj\" (UniqueName: \"kubernetes.io/projected/2306d020-a207-453c-afb9-cd70e5b04e22-kube-api-access-btgnj\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.882138 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.882192 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-config-data\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.882209 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-scripts\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.882276 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2306d020-a207-453c-afb9-cd70e5b04e22-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.882303 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2306d020-a207-453c-afb9-cd70e5b04e22-logs\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.882837 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2306d020-a207-453c-afb9-cd70e5b04e22-logs\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.888063 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-config-data-custom\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.893600 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2306d020-a207-453c-afb9-cd70e5b04e22-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.899492 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-config-data\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " 
pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.900287 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.904909 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-scripts\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:10 crc kubenswrapper[4625]: I1202 14:06:10.908132 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btgnj\" (UniqueName: \"kubernetes.io/projected/2306d020-a207-453c-afb9-cd70e5b04e22-kube-api-access-btgnj\") pod \"cinder-api-0\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " pod="openstack/cinder-api-0" Dec 02 14:06:11 crc kubenswrapper[4625]: I1202 14:06:11.160111 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 02 14:06:11 crc kubenswrapper[4625]: I1202 14:06:11.536631 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:06:11 crc kubenswrapper[4625]: W1202 14:06:11.601159 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f730bcc_709b_4d75_9d55_579a852a5855.slice/crio-f8dafcfb10430d9c2e38a0ecfc6974913ad341463567a57f8298658dd9668458 WatchSource:0}: Error finding container f8dafcfb10430d9c2e38a0ecfc6974913ad341463567a57f8298658dd9668458: Status 404 returned error can't find the container with id f8dafcfb10430d9c2e38a0ecfc6974913ad341463567a57f8298658dd9668458 Dec 02 14:06:11 crc kubenswrapper[4625]: I1202 14:06:11.719017 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lcqjf"] Dec 02 14:06:11 crc kubenswrapper[4625]: W1202 14:06:11.753343 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbaa9675_d1dc_4b23_962a_5607cbacad8d.slice/crio-e1227cf931738827ade4b73fb18e0ad558a712110f44343186d49884234bd4b5 WatchSource:0}: Error finding container e1227cf931738827ade4b73fb18e0ad558a712110f44343186d49884234bd4b5: Status 404 returned error can't find the container with id e1227cf931738827ade4b73fb18e0ad558a712110f44343186d49884234bd4b5 Dec 02 14:06:11 crc kubenswrapper[4625]: I1202 14:06:11.791661 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f730bcc-709b-4d75-9d55-579a852a5855","Type":"ContainerStarted","Data":"f8dafcfb10430d9c2e38a0ecfc6974913ad341463567a57f8298658dd9668458"} Dec 02 14:06:12 crc kubenswrapper[4625]: I1202 14:06:12.004233 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:06:12 crc kubenswrapper[4625]: W1202 14:06:12.039986 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2306d020_a207_453c_afb9_cd70e5b04e22.slice/crio-842fbb23e8e214013efe9722bb9d9d5022907cdaf9d0085f2a19dd66c0333808 WatchSource:0}: Error finding container 842fbb23e8e214013efe9722bb9d9d5022907cdaf9d0085f2a19dd66c0333808: Status 404 returned error can't find 
the container with id 842fbb23e8e214013efe9722bb9d9d5022907cdaf9d0085f2a19dd66c0333808 Dec 02 14:06:12 crc kubenswrapper[4625]: I1202 14:06:12.721746 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b967fc68b-flqxx" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:12 crc kubenswrapper[4625]: I1202 14:06:12.823280 4625 generic.go:334] "Generic (PLEG): container finished" podID="cbaa9675-d1dc-4b23-962a-5607cbacad8d" containerID="3ab622c63cde7a273919e28d09e24c66203621f466f62b504e734d5b7b231015" exitCode=0 Dec 02 14:06:12 crc kubenswrapper[4625]: I1202 14:06:12.823480 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" event={"ID":"cbaa9675-d1dc-4b23-962a-5607cbacad8d","Type":"ContainerDied","Data":"3ab622c63cde7a273919e28d09e24c66203621f466f62b504e734d5b7b231015"} Dec 02 14:06:12 crc kubenswrapper[4625]: I1202 14:06:12.823530 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" event={"ID":"cbaa9675-d1dc-4b23-962a-5607cbacad8d","Type":"ContainerStarted","Data":"e1227cf931738827ade4b73fb18e0ad558a712110f44343186d49884234bd4b5"} Dec 02 14:06:12 crc kubenswrapper[4625]: I1202 14:06:12.843546 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2306d020-a207-453c-afb9-cd70e5b04e22","Type":"ContainerStarted","Data":"842fbb23e8e214013efe9722bb9d9d5022907cdaf9d0085f2a19dd66c0333808"} Dec 02 14:06:13 crc kubenswrapper[4625]: I1202 14:06:13.036677 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7f988fb4d-rk87d" podUID="fc88c0ad-8893-4168-bf0c-e9ed829f1b62" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:13 crc kubenswrapper[4625]: I1202 14:06:13.252643 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b967fc68b-flqxx" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:13 crc kubenswrapper[4625]: I1202 14:06:13.768754 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7b967fc68b-flqxx" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:13 crc kubenswrapper[4625]: I1202 14:06:13.912876 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" event={"ID":"cbaa9675-d1dc-4b23-962a-5607cbacad8d","Type":"ContainerStarted","Data":"486163bcae764c6e4412db6f131005b340e165fd246b56105767872ce0ec7db0"} Dec 02 14:06:13 crc kubenswrapper[4625]: I1202 14:06:13.914687 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" Dec 02 14:06:13 crc kubenswrapper[4625]: I1202 14:06:13.949513 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" 
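
[annotation] The barbican-api probe failures above and below carry the exact error wording Go's net/http client emits when a request times out before response headers arrive; the kubelet's HTTP prober is such a client, so "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" means the healthcheck endpoint did not answer within the probe's timeout, not that the connection was refused. A minimal sketch that reproduces the same wording (the URL is taken from the log and assumed unreachable from wherever this runs):

    // Any HTTP server that fails to send response headers within the client
    // timeout yields this exact error text from Go's net/http.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 1 * time.Second}
        _, err := client.Get("http://10.217.0.157:9311/healthcheck")
        if err != nil {
            // Get "http://10.217.0.157:9311/healthcheck": context deadline
            // exceeded (Client.Timeout exceeded while awaiting headers)
            fmt.Println(err)
        }
    }
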
Dec 02 14:06:14 crc kubenswrapper[4625]: I1202 14:06:14.731483 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7b967fc68b-flqxx" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 02 14:06:14 crc kubenswrapper[4625]: I1202 14:06:14.972363 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f730bcc-709b-4d75-9d55-579a852a5855","Type":"ContainerStarted","Data":"5e83dc7b7fdf96cd7bac9f6787e4ccf5f911ecfec30c3db5c9ed177836a6e6dc"}
Dec 02 14:06:14 crc kubenswrapper[4625]: I1202 14:06:14.982294 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2306d020-a207-453c-afb9-cd70e5b04e22","Type":"ContainerStarted","Data":"65aad30b4c43e37e51469a3107876413b877a6eaf6e826292f03eef18f051936"}
Dec 02 14:06:15 crc kubenswrapper[4625]: I1202 14:06:15.977034 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 02 14:06:16 crc kubenswrapper[4625]: I1202 14:06:16.009484 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7f988fb4d-rk87d" podUID="fc88c0ad-8893-4168-bf0c-e9ed829f1b62" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 02 14:06:16 crc kubenswrapper[4625]: I1202 14:06:16.012916 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2306d020-a207-453c-afb9-cd70e5b04e22","Type":"ContainerStarted","Data":"7ec2e0464c6c6f528d20da8720f915c9b870f587c9aafed680c5d5bfd10a6b07"}
Dec 02 14:06:16 crc kubenswrapper[4625]: I1202 14:06:16.013434 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Dec 02 14:06:16 crc kubenswrapper[4625]: I1202 14:06:16.033707 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f730bcc-709b-4d75-9d55-579a852a5855","Type":"ContainerStarted","Data":"28d747eb014b8634711c10ec401da7a4968d468a1066623e6a3162f685292536"}
Dec 02 14:06:16 crc kubenswrapper[4625]: I1202 14:06:16.067605 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.067578953 podStartE2EDuration="6.067578953s" podCreationTimestamp="2025-12-02 14:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:06:16.045196199 +0000 UTC m=+1332.007373284" watchObservedRunningTime="2025-12-02 14:06:16.067578953 +0000 UTC m=+1332.029756028"
Dec 02 14:06:16 crc kubenswrapper[4625]: I1202 14:06:16.095539 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.765956683 podStartE2EDuration="7.095508268s" podCreationTimestamp="2025-12-02 14:06:09 +0000 UTC" firstStartedPulling="2025-12-02 14:06:11.621273247 +0000 UTC m=+1327.583450322" lastFinishedPulling="2025-12-02 14:06:12.950824832 +0000 UTC m=+1328.913001907" observedRunningTime="2025-12-02 14:06:16.090351987 +0000 UTC m=+1332.052529072" watchObservedRunningTime="2025-12-02 14:06:16.095508268 +0000 UTC m=+1332.057685343"
Dec 02 14:06:17 crc kubenswrapper[4625]: I1202 14:06:17.009874 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f988fb4d-rk87d" podUID="fc88c0ad-8893-4168-bf0c-e9ed829f1b62" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 02 14:06:17 crc kubenswrapper[4625]: I1202 14:06:17.009913 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f988fb4d-rk87d" podUID="fc88c0ad-8893-4168-bf0c-e9ed829f1b62" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 02 14:06:17 crc kubenswrapper[4625]: I1202 14:06:17.050290 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2306d020-a207-453c-afb9-cd70e5b04e22" containerName="cinder-api" containerID="cri-o://7ec2e0464c6c6f528d20da8720f915c9b870f587c9aafed680c5d5bfd10a6b07" gracePeriod=30
Dec 02 14:06:17 crc kubenswrapper[4625]: I1202 14:06:17.050447 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2306d020-a207-453c-afb9-cd70e5b04e22" containerName="cinder-api-log" containerID="cri-o://65aad30b4c43e37e51469a3107876413b877a6eaf6e826292f03eef18f051936" gracePeriod=30
Dec 02 14:06:17 crc kubenswrapper[4625]: I1202 14:06:17.459987 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:06:17 crc kubenswrapper[4625]: I1202 14:06:17.598134 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5855fb4fd-8xvmf"
Dec 02 14:06:17 crc kubenswrapper[4625]: I1202 14:06:17.765625 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b967fc68b-flqxx" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 02 14:06:17 crc kubenswrapper[4625]: I1202 14:06:17.783275 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b967fc68b-flqxx"
Dec 02 14:06:18 crc kubenswrapper[4625]: I1202 14:06:18.048691 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7f988fb4d-rk87d" podUID="fc88c0ad-8893-4168-bf0c-e9ed829f1b62" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 02 14:06:18 crc kubenswrapper[4625]: I1202 14:06:18.078032 4625 generic.go:334] "Generic (PLEG): container finished" podID="2306d020-a207-453c-afb9-cd70e5b04e22" containerID="7ec2e0464c6c6f528d20da8720f915c9b870f587c9aafed680c5d5bfd10a6b07" exitCode=0
Dec 02 14:06:18 crc kubenswrapper[4625]: I1202 14:06:18.078090 4625 generic.go:334] "Generic (PLEG): container finished" podID="2306d020-a207-453c-afb9-cd70e5b04e22" containerID="65aad30b4c43e37e51469a3107876413b877a6eaf6e826292f03eef18f051936" exitCode=143
podID="2306d020-a207-453c-afb9-cd70e5b04e22" containerID="65aad30b4c43e37e51469a3107876413b877a6eaf6e826292f03eef18f051936" exitCode=143 Dec 02 14:06:18 crc kubenswrapper[4625]: I1202 14:06:18.078327 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2306d020-a207-453c-afb9-cd70e5b04e22","Type":"ContainerDied","Data":"7ec2e0464c6c6f528d20da8720f915c9b870f587c9aafed680c5d5bfd10a6b07"} Dec 02 14:06:18 crc kubenswrapper[4625]: I1202 14:06:18.078394 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2306d020-a207-453c-afb9-cd70e5b04e22","Type":"ContainerDied","Data":"65aad30b4c43e37e51469a3107876413b877a6eaf6e826292f03eef18f051936"} Dec 02 14:06:18 crc kubenswrapper[4625]: I1202 14:06:18.297470 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b967fc68b-flqxx" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:18 crc kubenswrapper[4625]: I1202 14:06:18.316505 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:06:18 crc kubenswrapper[4625]: I1202 14:06:18.863110 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.008076 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-config-data\") pod \"2306d020-a207-453c-afb9-cd70e5b04e22\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.008174 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btgnj\" (UniqueName: \"kubernetes.io/projected/2306d020-a207-453c-afb9-cd70e5b04e22-kube-api-access-btgnj\") pod \"2306d020-a207-453c-afb9-cd70e5b04e22\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.008289 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2306d020-a207-453c-afb9-cd70e5b04e22-etc-machine-id\") pod \"2306d020-a207-453c-afb9-cd70e5b04e22\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.008448 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-combined-ca-bundle\") pod \"2306d020-a207-453c-afb9-cd70e5b04e22\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.008566 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-scripts\") pod \"2306d020-a207-453c-afb9-cd70e5b04e22\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.008633 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2306d020-a207-453c-afb9-cd70e5b04e22-logs\") pod \"2306d020-a207-453c-afb9-cd70e5b04e22\" (UID: 
\"2306d020-a207-453c-afb9-cd70e5b04e22\") " Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.008671 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-config-data-custom\") pod \"2306d020-a207-453c-afb9-cd70e5b04e22\" (UID: \"2306d020-a207-453c-afb9-cd70e5b04e22\") " Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.028600 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2306d020-a207-453c-afb9-cd70e5b04e22-kube-api-access-btgnj" (OuterVolumeSpecName: "kube-api-access-btgnj") pod "2306d020-a207-453c-afb9-cd70e5b04e22" (UID: "2306d020-a207-453c-afb9-cd70e5b04e22"). InnerVolumeSpecName "kube-api-access-btgnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.034445 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2306d020-a207-453c-afb9-cd70e5b04e22-logs" (OuterVolumeSpecName: "logs") pod "2306d020-a207-453c-afb9-cd70e5b04e22" (UID: "2306d020-a207-453c-afb9-cd70e5b04e22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.034561 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2306d020-a207-453c-afb9-cd70e5b04e22-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2306d020-a207-453c-afb9-cd70e5b04e22" (UID: "2306d020-a207-453c-afb9-cd70e5b04e22"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.062236 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2306d020-a207-453c-afb9-cd70e5b04e22" (UID: "2306d020-a207-453c-afb9-cd70e5b04e22"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.077736 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-scripts" (OuterVolumeSpecName: "scripts") pod "2306d020-a207-453c-afb9-cd70e5b04e22" (UID: "2306d020-a207-453c-afb9-cd70e5b04e22"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.116015 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.116504 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2306d020-a207-453c-afb9-cd70e5b04e22-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.118482 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2306d020-a207-453c-afb9-cd70e5b04e22","Type":"ContainerDied","Data":"842fbb23e8e214013efe9722bb9d9d5022907cdaf9d0085f2a19dd66c0333808"} Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.118569 4625 scope.go:117] "RemoveContainer" containerID="7ec2e0464c6c6f528d20da8720f915c9b870f587c9aafed680c5d5bfd10a6b07" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.118776 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.121131 4625 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.121176 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btgnj\" (UniqueName: \"kubernetes.io/projected/2306d020-a207-453c-afb9-cd70e5b04e22-kube-api-access-btgnj\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.121207 4625 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2306d020-a207-453c-afb9-cd70e5b04e22-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.199966 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-config-data" (OuterVolumeSpecName: "config-data") pod "2306d020-a207-453c-afb9-cd70e5b04e22" (UID: "2306d020-a207-453c-afb9-cd70e5b04e22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.213117 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2306d020-a207-453c-afb9-cd70e5b04e22" (UID: "2306d020-a207-453c-afb9-cd70e5b04e22"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.222919 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.223418 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2306d020-a207-453c-afb9-cd70e5b04e22-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.299620 4625 scope.go:117] "RemoveContainer" containerID="65aad30b4c43e37e51469a3107876413b877a6eaf6e826292f03eef18f051936" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.467559 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.481467 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.507734 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:06:19 crc kubenswrapper[4625]: E1202 14:06:19.508239 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2306d020-a207-453c-afb9-cd70e5b04e22" containerName="cinder-api-log" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.508258 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="2306d020-a207-453c-afb9-cd70e5b04e22" containerName="cinder-api-log" Dec 02 14:06:19 crc kubenswrapper[4625]: E1202 14:06:19.508321 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2306d020-a207-453c-afb9-cd70e5b04e22" containerName="cinder-api" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.508328 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="2306d020-a207-453c-afb9-cd70e5b04e22" containerName="cinder-api" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.508533 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="2306d020-a207-453c-afb9-cd70e5b04e22" containerName="cinder-api" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.508550 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="2306d020-a207-453c-afb9-cd70e5b04e22" containerName="cinder-api-log" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.509809 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.515873 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.516126 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.525750 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.564820 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.644684 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d2c435c-5496-4ec7-ac3f-eab4e5728204-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.644966 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.645183 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.645291 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-scripts\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.645342 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.645586 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d2c435c-5496-4ec7-ac3f-eab4e5728204-logs\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.645721 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdfg8\" (UniqueName: \"kubernetes.io/projected/3d2c435c-5496-4ec7-ac3f-eab4e5728204-kube-api-access-bdfg8\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.645772 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-config-data-custom\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.645796 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-config-data\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.748434 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-config-data-custom\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.748512 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-config-data\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.748608 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d2c435c-5496-4ec7-ac3f-eab4e5728204-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.748645 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.748687 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.748723 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-scripts\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.748743 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.748833 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d2c435c-5496-4ec7-ac3f-eab4e5728204-logs\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.748896 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdfg8\" 
(UniqueName: \"kubernetes.io/projected/3d2c435c-5496-4ec7-ac3f-eab4e5728204-kube-api-access-bdfg8\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.753737 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d2c435c-5496-4ec7-ac3f-eab4e5728204-logs\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.753840 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d2c435c-5496-4ec7-ac3f-eab4e5728204-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.758814 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-config-data\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.760126 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.762083 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.762191 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.767248 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-scripts\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.769596 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d2c435c-5496-4ec7-ac3f-eab4e5728204-config-data-custom\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.781058 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdfg8\" (UniqueName: \"kubernetes.io/projected/3d2c435c-5496-4ec7-ac3f-eab4e5728204-kube-api-access-bdfg8\") pod \"cinder-api-0\" (UID: \"3d2c435c-5496-4ec7-ac3f-eab4e5728204\") " pod="openstack/cinder-api-0" Dec 02 14:06:19 crc kubenswrapper[4625]: I1202 14:06:19.883964 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 02 14:06:20 crc kubenswrapper[4625]: I1202 14:06:20.431132 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 02 14:06:20 crc kubenswrapper[4625]: I1202 14:06:20.454572 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="3f730bcc-709b-4d75-9d55-579a852a5855" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.160:8080/\": dial tcp 10.217.0.160:8080: connect: connection refused" Dec 02 14:06:20 crc kubenswrapper[4625]: I1202 14:06:20.473682 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" Dec 02 14:06:20 crc kubenswrapper[4625]: I1202 14:06:20.571986 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-n2q97"] Dec 02 14:06:20 crc kubenswrapper[4625]: I1202 14:06:20.572637 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-n2q97" podUID="88761c55-9427-4195-a3c9-c54a5b8554c8" containerName="dnsmasq-dns" containerID="cri-o://eb3a3a5ef4dfbd32d4fbe319f8d66ee92abe31f4c57c1969002e6206b2172d39" gracePeriod=10 Dec 02 14:06:20 crc kubenswrapper[4625]: W1202 14:06:20.647764 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d2c435c_5496_4ec7_ac3f_eab4e5728204.slice/crio-9e20fdbcfa9dc4cd5c6ffa11ca177f61f38c6fddd8c47db73083db552abe0e22 WatchSource:0}: Error finding container 9e20fdbcfa9dc4cd5c6ffa11ca177f61f38c6fddd8c47db73083db552abe0e22: Status 404 returned error can't find the container with id 9e20fdbcfa9dc4cd5c6ffa11ca177f61f38c6fddd8c47db73083db552abe0e22 Dec 02 14:06:20 crc kubenswrapper[4625]: I1202 14:06:20.697105 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 02 14:06:20 crc kubenswrapper[4625]: I1202 14:06:20.879280 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2306d020-a207-453c-afb9-cd70e5b04e22" path="/var/lib/kubelet/pods/2306d020-a207-453c-afb9-cd70e5b04e22/volumes" Dec 02 14:06:21 crc kubenswrapper[4625]: I1202 14:06:21.016608 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7f988fb4d-rk87d" podUID="fc88c0ad-8893-4168-bf0c-e9ed829f1b62" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:21 crc kubenswrapper[4625]: I1202 14:06:21.237689 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d2c435c-5496-4ec7-ac3f-eab4e5728204","Type":"ContainerStarted","Data":"9e20fdbcfa9dc4cd5c6ffa11ca177f61f38c6fddd8c47db73083db552abe0e22"} Dec 02 14:06:21 crc kubenswrapper[4625]: I1202 14:06:21.261096 4625 generic.go:334] "Generic (PLEG): container finished" podID="88761c55-9427-4195-a3c9-c54a5b8554c8" containerID="eb3a3a5ef4dfbd32d4fbe319f8d66ee92abe31f4c57c1969002e6206b2172d39" exitCode=0 Dec 02 14:06:21 crc kubenswrapper[4625]: I1202 14:06:21.261159 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-n2q97" event={"ID":"88761c55-9427-4195-a3c9-c54a5b8554c8","Type":"ContainerDied","Data":"eb3a3a5ef4dfbd32d4fbe319f8d66ee92abe31f4c57c1969002e6206b2172d39"} Dec 02 14:06:21 crc kubenswrapper[4625]: I1202 14:06:21.966279 4625 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-n2q97" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.027386 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f988fb4d-rk87d" podUID="fc88c0ad-8893-4168-bf0c-e9ed829f1b62" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.027902 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f988fb4d-rk87d" podUID="fc88c0ad-8893-4168-bf0c-e9ed829f1b62" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.049382 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-ovsdbserver-nb\") pod \"88761c55-9427-4195-a3c9-c54a5b8554c8\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.049463 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-dns-swift-storage-0\") pod \"88761c55-9427-4195-a3c9-c54a5b8554c8\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.049513 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn864\" (UniqueName: \"kubernetes.io/projected/88761c55-9427-4195-a3c9-c54a5b8554c8-kube-api-access-vn864\") pod \"88761c55-9427-4195-a3c9-c54a5b8554c8\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.049627 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-config\") pod \"88761c55-9427-4195-a3c9-c54a5b8554c8\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.049701 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-ovsdbserver-sb\") pod \"88761c55-9427-4195-a3c9-c54a5b8554c8\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.049776 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-dns-svc\") pod \"88761c55-9427-4195-a3c9-c54a5b8554c8\" (UID: \"88761c55-9427-4195-a3c9-c54a5b8554c8\") " Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.095906 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.096570 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88761c55-9427-4195-a3c9-c54a5b8554c8-kube-api-access-vn864" (OuterVolumeSpecName: "kube-api-access-vn864") pod "88761c55-9427-4195-a3c9-c54a5b8554c8" (UID: 
"88761c55-9427-4195-a3c9-c54a5b8554c8"). InnerVolumeSpecName "kube-api-access-vn864". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.148406 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f988fb4d-rk87d" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.160098 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn864\" (UniqueName: \"kubernetes.io/projected/88761c55-9427-4195-a3c9-c54a5b8554c8-kube-api-access-vn864\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.235824 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "88761c55-9427-4195-a3c9-c54a5b8554c8" (UID: "88761c55-9427-4195-a3c9-c54a5b8554c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.267154 4625 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.310236 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-config" (OuterVolumeSpecName: "config") pod "88761c55-9427-4195-a3c9-c54a5b8554c8" (UID: "88761c55-9427-4195-a3c9-c54a5b8554c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.321935 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "88761c55-9427-4195-a3c9-c54a5b8554c8" (UID: "88761c55-9427-4195-a3c9-c54a5b8554c8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.379511 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b967fc68b-flqxx"] Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.379971 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b967fc68b-flqxx" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api-log" containerID="cri-o://7155cfa7de55070cff7a7874be6fc821cc88ed501a0c167b3af01a33182482be" gracePeriod=30 Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.381079 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b967fc68b-flqxx" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api" containerID="cri-o://1a3cf40229a149b6f96d45542d9e2d953ac5c335e7d59fd18c322e334129e014" gracePeriod=30 Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.384238 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-n2q97" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.385050 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-n2q97" event={"ID":"88761c55-9427-4195-a3c9-c54a5b8554c8","Type":"ContainerDied","Data":"d325df2cccc66f196773df8abd480c9f0b97bb9a210a13e50d2c735ad3e3fec3"} Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.385149 4625 scope.go:117] "RemoveContainer" containerID="eb3a3a5ef4dfbd32d4fbe319f8d66ee92abe31f4c57c1969002e6206b2172d39" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.394527 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "88761c55-9427-4195-a3c9-c54a5b8554c8" (UID: "88761c55-9427-4195-a3c9-c54a5b8554c8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.416075 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.416200 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.416220 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.469262 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "88761c55-9427-4195-a3c9-c54a5b8554c8" (UID: "88761c55-9427-4195-a3c9-c54a5b8554c8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.518992 4625 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88761c55-9427-4195-a3c9-c54a5b8554c8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.732997 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-n2q97"] Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.741738 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-n2q97"] Dec 02 14:06:22 crc kubenswrapper[4625]: I1202 14:06:22.883600 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88761c55-9427-4195-a3c9-c54a5b8554c8" path="/var/lib/kubelet/pods/88761c55-9427-4195-a3c9-c54a5b8554c8/volumes" Dec 02 14:06:23 crc kubenswrapper[4625]: I1202 14:06:23.394779 4625 generic.go:334] "Generic (PLEG): container finished" podID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerID="7155cfa7de55070cff7a7874be6fc821cc88ed501a0c167b3af01a33182482be" exitCode=143 Dec 02 14:06:23 crc kubenswrapper[4625]: I1202 14:06:23.394862 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b967fc68b-flqxx" event={"ID":"6305c486-3db4-4bea-8d41-555d94ea0e5d","Type":"ContainerDied","Data":"7155cfa7de55070cff7a7874be6fc821cc88ed501a0c167b3af01a33182482be"} Dec 02 14:06:23 crc kubenswrapper[4625]: I1202 14:06:23.398981 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d2c435c-5496-4ec7-ac3f-eab4e5728204","Type":"ContainerStarted","Data":"b376691871038132e6b5342547f2fbf0ba46438dda8c89ad137aa9666199c5fe"} Dec 02 14:06:24 crc kubenswrapper[4625]: I1202 14:06:24.462930 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6f7c78dbd6-lsbdb" Dec 02 14:06:24 crc kubenswrapper[4625]: I1202 14:06:24.959889 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 02 14:06:24 crc kubenswrapper[4625]: E1202 14:06:24.961202 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88761c55-9427-4195-a3c9-c54a5b8554c8" containerName="dnsmasq-dns" Dec 02 14:06:24 crc kubenswrapper[4625]: I1202 14:06:24.961224 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="88761c55-9427-4195-a3c9-c54a5b8554c8" containerName="dnsmasq-dns" Dec 02 14:06:24 crc kubenswrapper[4625]: E1202 14:06:24.961275 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88761c55-9427-4195-a3c9-c54a5b8554c8" containerName="init" Dec 02 14:06:24 crc kubenswrapper[4625]: I1202 14:06:24.961285 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="88761c55-9427-4195-a3c9-c54a5b8554c8" containerName="init" Dec 02 14:06:24 crc kubenswrapper[4625]: I1202 14:06:24.961571 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="88761c55-9427-4195-a3c9-c54a5b8554c8" containerName="dnsmasq-dns" Dec 02 14:06:24 crc kubenswrapper[4625]: I1202 14:06:24.962627 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 02 14:06:24 crc kubenswrapper[4625]: I1202 14:06:24.966433 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 02 14:06:24 crc kubenswrapper[4625]: I1202 14:06:24.966738 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-qk9qd" Dec 02 14:06:24 crc kubenswrapper[4625]: I1202 14:06:24.970180 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.010030 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.097878 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6c5341a2-e3e8-44ed-94fa-24271e445650-openstack-config\") pod \"openstackclient\" (UID: \"6c5341a2-e3e8-44ed-94fa-24271e445650\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.098437 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsqln\" (UniqueName: \"kubernetes.io/projected/6c5341a2-e3e8-44ed-94fa-24271e445650-kube-api-access-lsqln\") pod \"openstackclient\" (UID: \"6c5341a2-e3e8-44ed-94fa-24271e445650\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.098523 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c5341a2-e3e8-44ed-94fa-24271e445650-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6c5341a2-e3e8-44ed-94fa-24271e445650\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.100952 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6c5341a2-e3e8-44ed-94fa-24271e445650-openstack-config-secret\") pod \"openstackclient\" (UID: \"6c5341a2-e3e8-44ed-94fa-24271e445650\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.204787 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsqln\" (UniqueName: \"kubernetes.io/projected/6c5341a2-e3e8-44ed-94fa-24271e445650-kube-api-access-lsqln\") pod \"openstackclient\" (UID: \"6c5341a2-e3e8-44ed-94fa-24271e445650\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.204852 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c5341a2-e3e8-44ed-94fa-24271e445650-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6c5341a2-e3e8-44ed-94fa-24271e445650\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.204955 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6c5341a2-e3e8-44ed-94fa-24271e445650-openstack-config-secret\") pod \"openstackclient\" (UID: \"6c5341a2-e3e8-44ed-94fa-24271e445650\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.205038 4625 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6c5341a2-e3e8-44ed-94fa-24271e445650-openstack-config\") pod \"openstackclient\" (UID: \"6c5341a2-e3e8-44ed-94fa-24271e445650\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.206133 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6c5341a2-e3e8-44ed-94fa-24271e445650-openstack-config\") pod \"openstackclient\" (UID: \"6c5341a2-e3e8-44ed-94fa-24271e445650\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.243117 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6c5341a2-e3e8-44ed-94fa-24271e445650-openstack-config-secret\") pod \"openstackclient\" (UID: \"6c5341a2-e3e8-44ed-94fa-24271e445650\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.249992 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsqln\" (UniqueName: \"kubernetes.io/projected/6c5341a2-e3e8-44ed-94fa-24271e445650-kube-api-access-lsqln\") pod \"openstackclient\" (UID: \"6c5341a2-e3e8-44ed-94fa-24271e445650\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.253662 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c5341a2-e3e8-44ed-94fa-24271e445650-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6c5341a2-e3e8-44ed-94fa-24271e445650\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.311652 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.323929 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.349732 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.371726 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.377921 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.391802 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.514116 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e342617-f071-4967-a02d-38534c2c7c11-openstack-config-secret\") pod \"openstackclient\" (UID: \"7e342617-f071-4967-a02d-38534c2c7c11\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.514241 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e342617-f071-4967-a02d-38534c2c7c11-openstack-config\") pod \"openstackclient\" (UID: \"7e342617-f071-4967-a02d-38534c2c7c11\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.514270 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e342617-f071-4967-a02d-38534c2c7c11-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7e342617-f071-4967-a02d-38534c2c7c11\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.514327 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmwhd\" (UniqueName: \"kubernetes.io/projected/7e342617-f071-4967-a02d-38534c2c7c11-kube-api-access-qmwhd\") pod \"openstackclient\" (UID: \"7e342617-f071-4967-a02d-38534c2c7c11\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.619226 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e342617-f071-4967-a02d-38534c2c7c11-openstack-config\") pod \"openstackclient\" (UID: \"7e342617-f071-4967-a02d-38534c2c7c11\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.619324 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e342617-f071-4967-a02d-38534c2c7c11-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7e342617-f071-4967-a02d-38534c2c7c11\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.619372 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmwhd\" (UniqueName: \"kubernetes.io/projected/7e342617-f071-4967-a02d-38534c2c7c11-kube-api-access-qmwhd\") pod \"openstackclient\" (UID: \"7e342617-f071-4967-a02d-38534c2c7c11\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.619530 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e342617-f071-4967-a02d-38534c2c7c11-openstack-config-secret\") pod \"openstackclient\" (UID: \"7e342617-f071-4967-a02d-38534c2c7c11\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.622920 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e342617-f071-4967-a02d-38534c2c7c11-openstack-config\") pod \"openstackclient\" (UID: 
\"7e342617-f071-4967-a02d-38534c2c7c11\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.656167 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e342617-f071-4967-a02d-38534c2c7c11-openstack-config-secret\") pod \"openstackclient\" (UID: \"7e342617-f071-4967-a02d-38534c2c7c11\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.661547 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e342617-f071-4967-a02d-38534c2c7c11-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7e342617-f071-4967-a02d-38534c2c7c11\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.671864 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmwhd\" (UniqueName: \"kubernetes.io/projected/7e342617-f071-4967-a02d-38534c2c7c11-kube-api-access-qmwhd\") pod \"openstackclient\" (UID: \"7e342617-f071-4967-a02d-38534c2c7c11\") " pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.730643 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.751378 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 02 14:06:25 crc kubenswrapper[4625]: I1202 14:06:25.812418 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:06:26 crc kubenswrapper[4625]: I1202 14:06:26.454921 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3f730bcc-709b-4d75-9d55-579a852a5855" containerName="cinder-scheduler" containerID="cri-o://5e83dc7b7fdf96cd7bac9f6787e4ccf5f911ecfec30c3db5c9ed177836a6e6dc" gracePeriod=30 Dec 02 14:06:26 crc kubenswrapper[4625]: I1202 14:06:26.455566 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3f730bcc-709b-4d75-9d55-579a852a5855" containerName="probe" containerID="cri-o://28d747eb014b8634711c10ec401da7a4968d468a1066623e6a3162f685292536" gracePeriod=30 Dec 02 14:06:26 crc kubenswrapper[4625]: I1202 14:06:26.949866 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b967fc68b-flqxx" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:55318->10.217.0.157:9311: read: connection reset by peer" Dec 02 14:06:26 crc kubenswrapper[4625]: I1202 14:06:26.949913 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b967fc68b-flqxx" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:55324->10.217.0.157:9311: read: connection reset by peer" Dec 02 14:06:27 crc kubenswrapper[4625]: I1202 14:06:27.498503 4625 generic.go:334] "Generic (PLEG): container finished" podID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerID="1a3cf40229a149b6f96d45542d9e2d953ac5c335e7d59fd18c322e334129e014" exitCode=0 Dec 02 14:06:27 crc kubenswrapper[4625]: I1202 14:06:27.498963 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-7b967fc68b-flqxx" event={"ID":"6305c486-3db4-4bea-8d41-555d94ea0e5d","Type":"ContainerDied","Data":"1a3cf40229a149b6f96d45542d9e2d953ac5c335e7d59fd18c322e334129e014"} Dec 02 14:06:27 crc kubenswrapper[4625]: I1202 14:06:27.679191 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b967fc68b-flqxx" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": dial tcp 10.217.0.157:9311: connect: connection refused" Dec 02 14:06:27 crc kubenswrapper[4625]: I1202 14:06:27.679217 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b967fc68b-flqxx" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": dial tcp 10.217.0.157:9311: connect: connection refused" Dec 02 14:06:28 crc kubenswrapper[4625]: I1202 14:06:28.515394 4625 generic.go:334] "Generic (PLEG): container finished" podID="3f730bcc-709b-4d75-9d55-579a852a5855" containerID="28d747eb014b8634711c10ec401da7a4968d468a1066623e6a3162f685292536" exitCode=0 Dec 02 14:06:28 crc kubenswrapper[4625]: I1202 14:06:28.515445 4625 generic.go:334] "Generic (PLEG): container finished" podID="3f730bcc-709b-4d75-9d55-579a852a5855" containerID="5e83dc7b7fdf96cd7bac9f6787e4ccf5f911ecfec30c3db5c9ed177836a6e6dc" exitCode=0 Dec 02 14:06:28 crc kubenswrapper[4625]: I1202 14:06:28.515467 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f730bcc-709b-4d75-9d55-579a852a5855","Type":"ContainerDied","Data":"28d747eb014b8634711c10ec401da7a4968d468a1066623e6a3162f685292536"} Dec 02 14:06:28 crc kubenswrapper[4625]: I1202 14:06:28.515501 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f730bcc-709b-4d75-9d55-579a852a5855","Type":"ContainerDied","Data":"5e83dc7b7fdf96cd7bac9f6787e4ccf5f911ecfec30c3db5c9ed177836a6e6dc"} Dec 02 14:06:31 crc kubenswrapper[4625]: I1202 14:06:31.287245 4625 scope.go:117] "RemoveContainer" containerID="4e6d2d5de34710e37838f70a6d577ba71ca58e87b01af3cb31617e688012e67a" Dec 02 14:06:32 crc kubenswrapper[4625]: E1202 14:06:32.425221 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Dec 02 14:06:32 crc kubenswrapper[4625]: E1202 14:06:32.426921 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgpgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(abbd3215-4ced-473b-84a7-1f859e2782b2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 02 14:06:32 crc kubenswrapper[4625]: E1202 14:06:32.428555 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="abbd3215-4ced-473b-84a7-1f859e2782b2" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.610846 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"3f730bcc-709b-4d75-9d55-579a852a5855","Type":"ContainerDied","Data":"f8dafcfb10430d9c2e38a0ecfc6974913ad341463567a57f8298658dd9668458"} Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.611232 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8dafcfb10430d9c2e38a0ecfc6974913ad341463567a57f8298658dd9668458" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.626570 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b967fc68b-flqxx" event={"ID":"6305c486-3db4-4bea-8d41-555d94ea0e5d","Type":"ContainerDied","Data":"6898ac25fad160a6b3b8744112208525266c0f9694d45603a6cbaf944a4996e1"} Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.626679 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6898ac25fad160a6b3b8744112208525266c0f9694d45603a6cbaf944a4996e1" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.628406 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="abbd3215-4ced-473b-84a7-1f859e2782b2" containerName="ceilometer-notification-agent" containerID="cri-o://c0b85fc85b6af33a966bd899d504149093149ae35bad48a55cf819efdccadc6f" gracePeriod=30 Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.640582 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.682681 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.718921 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-combined-ca-bundle\") pod \"6305c486-3db4-4bea-8d41-555d94ea0e5d\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.719514 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-config-data\") pod \"6305c486-3db4-4bea-8d41-555d94ea0e5d\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.719603 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-config-data-custom\") pod \"6305c486-3db4-4bea-8d41-555d94ea0e5d\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.719636 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvk7j\" (UniqueName: \"kubernetes.io/projected/6305c486-3db4-4bea-8d41-555d94ea0e5d-kube-api-access-tvk7j\") pod \"6305c486-3db4-4bea-8d41-555d94ea0e5d\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.719796 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6305c486-3db4-4bea-8d41-555d94ea0e5d-logs\") pod \"6305c486-3db4-4bea-8d41-555d94ea0e5d\" (UID: \"6305c486-3db4-4bea-8d41-555d94ea0e5d\") " Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.726128 4625 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/6305c486-3db4-4bea-8d41-555d94ea0e5d-logs" (OuterVolumeSpecName: "logs") pod "6305c486-3db4-4bea-8d41-555d94ea0e5d" (UID: "6305c486-3db4-4bea-8d41-555d94ea0e5d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.772625 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6305c486-3db4-4bea-8d41-555d94ea0e5d-kube-api-access-tvk7j" (OuterVolumeSpecName: "kube-api-access-tvk7j") pod "6305c486-3db4-4bea-8d41-555d94ea0e5d" (UID: "6305c486-3db4-4bea-8d41-555d94ea0e5d"). InnerVolumeSpecName "kube-api-access-tvk7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.778260 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6305c486-3db4-4bea-8d41-555d94ea0e5d" (UID: "6305c486-3db4-4bea-8d41-555d94ea0e5d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.821715 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-config-data\") pod \"3f730bcc-709b-4d75-9d55-579a852a5855\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.821805 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f730bcc-709b-4d75-9d55-579a852a5855-etc-machine-id\") pod \"3f730bcc-709b-4d75-9d55-579a852a5855\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.821885 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-combined-ca-bundle\") pod \"3f730bcc-709b-4d75-9d55-579a852a5855\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.822011 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-scripts\") pod \"3f730bcc-709b-4d75-9d55-579a852a5855\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.822042 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-config-data-custom\") pod \"3f730bcc-709b-4d75-9d55-579a852a5855\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.822171 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrlsv\" (UniqueName: \"kubernetes.io/projected/3f730bcc-709b-4d75-9d55-579a852a5855-kube-api-access-rrlsv\") pod \"3f730bcc-709b-4d75-9d55-579a852a5855\" (UID: \"3f730bcc-709b-4d75-9d55-579a852a5855\") " Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.823876 4625 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.823900 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvk7j\" (UniqueName: \"kubernetes.io/projected/6305c486-3db4-4bea-8d41-555d94ea0e5d-kube-api-access-tvk7j\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.823912 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6305c486-3db4-4bea-8d41-555d94ea0e5d-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.833577 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f730bcc-709b-4d75-9d55-579a852a5855-kube-api-access-rrlsv" (OuterVolumeSpecName: "kube-api-access-rrlsv") pod "3f730bcc-709b-4d75-9d55-579a852a5855" (UID: "3f730bcc-709b-4d75-9d55-579a852a5855"). InnerVolumeSpecName "kube-api-access-rrlsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.836406 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f730bcc-709b-4d75-9d55-579a852a5855-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3f730bcc-709b-4d75-9d55-579a852a5855" (UID: "3f730bcc-709b-4d75-9d55-579a852a5855"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.838917 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6305c486-3db4-4bea-8d41-555d94ea0e5d" (UID: "6305c486-3db4-4bea-8d41-555d94ea0e5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.898485 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3f730bcc-709b-4d75-9d55-579a852a5855" (UID: "3f730bcc-709b-4d75-9d55-579a852a5855"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.898636 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-scripts" (OuterVolumeSpecName: "scripts") pod "3f730bcc-709b-4d75-9d55-579a852a5855" (UID: "3f730bcc-709b-4d75-9d55-579a852a5855"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.930698 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.930728 4625 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.930737 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.930751 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrlsv\" (UniqueName: \"kubernetes.io/projected/3f730bcc-709b-4d75-9d55-579a852a5855-kube-api-access-rrlsv\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.930761 4625 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f730bcc-709b-4d75-9d55-579a852a5855-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:32 crc kubenswrapper[4625]: I1202 14:06:32.936265 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-config-data" (OuterVolumeSpecName: "config-data") pod "6305c486-3db4-4bea-8d41-555d94ea0e5d" (UID: "6305c486-3db4-4bea-8d41-555d94ea0e5d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.035051 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6305c486-3db4-4bea-8d41-555d94ea0e5d-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.064574 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f730bcc-709b-4d75-9d55-579a852a5855" (UID: "3f730bcc-709b-4d75-9d55-579a852a5855"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.126827 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-config-data" (OuterVolumeSpecName: "config-data") pod "3f730bcc-709b-4d75-9d55-579a852a5855" (UID: "3f730bcc-709b-4d75-9d55-579a852a5855"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.136939 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.136977 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f730bcc-709b-4d75-9d55-579a852a5855-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.176549 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 02 14:06:33 crc kubenswrapper[4625]: W1202 14:06:33.193893 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e342617_f071_4967_a02d_38534c2c7c11.slice/crio-77955e20ae8c759b69930d33cd7218211325f58a2b33068e0f705da551b0c659 WatchSource:0}: Error finding container 77955e20ae8c759b69930d33cd7218211325f58a2b33068e0f705da551b0c659: Status 404 returned error can't find the container with id 77955e20ae8c759b69930d33cd7218211325f58a2b33068e0f705da551b0c659 Dec 02 14:06:33 crc kubenswrapper[4625]: E1202 14:06:33.296079 4625 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 02 14:06:33 crc kubenswrapper[4625]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_6c5341a2-e3e8-44ed-94fa-24271e445650_0(59fcf493fb99d17978c8b8607d2451f529709a5862b9fc358bf6c0cc3c1b60f1): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"59fcf493fb99d17978c8b8607d2451f529709a5862b9fc358bf6c0cc3c1b60f1" Netns:"/var/run/netns/b2bc92a2-9144-4968-9dc4-5d4bfba96787" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=59fcf493fb99d17978c8b8607d2451f529709a5862b9fc358bf6c0cc3c1b60f1;K8S_POD_UID=6c5341a2-e3e8-44ed-94fa-24271e445650" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/6c5341a2-e3e8-44ed-94fa-24271e445650]: expected pod UID "6c5341a2-e3e8-44ed-94fa-24271e445650" but got "7e342617-f071-4967-a02d-38534c2c7c11" from Kube API Dec 02 14:06:33 crc kubenswrapper[4625]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 02 14:06:33 crc kubenswrapper[4625]: > Dec 02 14:06:33 crc kubenswrapper[4625]: E1202 14:06:33.296496 4625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 02 14:06:33 crc kubenswrapper[4625]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_6c5341a2-e3e8-44ed-94fa-24271e445650_0(59fcf493fb99d17978c8b8607d2451f529709a5862b9fc358bf6c0cc3c1b60f1): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"59fcf493fb99d17978c8b8607d2451f529709a5862b9fc358bf6c0cc3c1b60f1" Netns:"/var/run/netns/b2bc92a2-9144-4968-9dc4-5d4bfba96787" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=59fcf493fb99d17978c8b8607d2451f529709a5862b9fc358bf6c0cc3c1b60f1;K8S_POD_UID=6c5341a2-e3e8-44ed-94fa-24271e445650" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/6c5341a2-e3e8-44ed-94fa-24271e445650]: expected pod UID "6c5341a2-e3e8-44ed-94fa-24271e445650" but got "7e342617-f071-4967-a02d-38534c2c7c11" from Kube API Dec 02 14:06:33 crc kubenswrapper[4625]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 02 14:06:33 crc kubenswrapper[4625]: > pod="openstack/openstackclient" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.648324 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7e342617-f071-4967-a02d-38534c2c7c11","Type":"ContainerStarted","Data":"77955e20ae8c759b69930d33cd7218211325f58a2b33068e0f705da551b0c659"} Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.648422 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.648475 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b967fc68b-flqxx" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.648495 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.664837 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.682898 4625 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6c5341a2-e3e8-44ed-94fa-24271e445650" podUID="7e342617-f071-4967-a02d-38534c2c7c11" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.721385 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b967fc68b-flqxx"] Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.733268 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7b967fc68b-flqxx"] Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.742279 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.757911 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.786703 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:06:33 crc kubenswrapper[4625]: E1202 14:06:33.787630 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api-log" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.787710 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api-log" Dec 02 14:06:33 crc kubenswrapper[4625]: E1202 14:06:33.787850 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f730bcc-709b-4d75-9d55-579a852a5855" containerName="probe" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.787909 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f730bcc-709b-4d75-9d55-579a852a5855" containerName="probe" Dec 02 14:06:33 crc kubenswrapper[4625]: E1202 14:06:33.787978 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.788043 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api" Dec 02 14:06:33 crc kubenswrapper[4625]: E1202 14:06:33.788106 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f730bcc-709b-4d75-9d55-579a852a5855" containerName="cinder-scheduler" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.788191 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f730bcc-709b-4d75-9d55-579a852a5855" containerName="cinder-scheduler" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.788456 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f730bcc-709b-4d75-9d55-579a852a5855" containerName="cinder-scheduler" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.788534 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f730bcc-709b-4d75-9d55-579a852a5855" containerName="probe" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.788593 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api-log" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.788676 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" containerName="barbican-api" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.789926 
4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.793949 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.811336 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.869100 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsqln\" (UniqueName: \"kubernetes.io/projected/6c5341a2-e3e8-44ed-94fa-24271e445650-kube-api-access-lsqln\") pod \"6c5341a2-e3e8-44ed-94fa-24271e445650\" (UID: \"6c5341a2-e3e8-44ed-94fa-24271e445650\") " Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.869154 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c5341a2-e3e8-44ed-94fa-24271e445650-combined-ca-bundle\") pod \"6c5341a2-e3e8-44ed-94fa-24271e445650\" (UID: \"6c5341a2-e3e8-44ed-94fa-24271e445650\") " Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.870639 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6c5341a2-e3e8-44ed-94fa-24271e445650-openstack-config-secret\") pod \"6c5341a2-e3e8-44ed-94fa-24271e445650\" (UID: \"6c5341a2-e3e8-44ed-94fa-24271e445650\") " Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.870674 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6c5341a2-e3e8-44ed-94fa-24271e445650-openstack-config\") pod \"6c5341a2-e3e8-44ed-94fa-24271e445650\" (UID: \"6c5341a2-e3e8-44ed-94fa-24271e445650\") " Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.871169 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a4032f9-0bbb-4491-9f59-8b6006133dd6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.871218 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a4032f9-0bbb-4491-9f59-8b6006133dd6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.871240 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a4032f9-0bbb-4491-9f59-8b6006133dd6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.871357 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n854\" (UniqueName: \"kubernetes.io/projected/9a4032f9-0bbb-4491-9f59-8b6006133dd6-kube-api-access-2n854\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.871431 4625 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a4032f9-0bbb-4491-9f59-8b6006133dd6-config-data\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.871455 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a4032f9-0bbb-4491-9f59-8b6006133dd6-scripts\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.872179 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5341a2-e3e8-44ed-94fa-24271e445650-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6c5341a2-e3e8-44ed-94fa-24271e445650" (UID: "6c5341a2-e3e8-44ed-94fa-24271e445650"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.877361 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c5341a2-e3e8-44ed-94fa-24271e445650-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c5341a2-e3e8-44ed-94fa-24271e445650" (UID: "6c5341a2-e3e8-44ed-94fa-24271e445650"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.886893 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c5341a2-e3e8-44ed-94fa-24271e445650-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6c5341a2-e3e8-44ed-94fa-24271e445650" (UID: "6c5341a2-e3e8-44ed-94fa-24271e445650"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.888215 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c5341a2-e3e8-44ed-94fa-24271e445650-kube-api-access-lsqln" (OuterVolumeSpecName: "kube-api-access-lsqln") pod "6c5341a2-e3e8-44ed-94fa-24271e445650" (UID: "6c5341a2-e3e8-44ed-94fa-24271e445650"). InnerVolumeSpecName "kube-api-access-lsqln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.974206 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a4032f9-0bbb-4491-9f59-8b6006133dd6-config-data\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.974287 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a4032f9-0bbb-4491-9f59-8b6006133dd6-scripts\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.974404 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a4032f9-0bbb-4491-9f59-8b6006133dd6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.974440 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a4032f9-0bbb-4491-9f59-8b6006133dd6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.974468 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a4032f9-0bbb-4491-9f59-8b6006133dd6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.974572 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n854\" (UniqueName: \"kubernetes.io/projected/9a4032f9-0bbb-4491-9f59-8b6006133dd6-kube-api-access-2n854\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.974678 4625 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6c5341a2-e3e8-44ed-94fa-24271e445650-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.974699 4625 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6c5341a2-e3e8-44ed-94fa-24271e445650-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.974712 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsqln\" (UniqueName: \"kubernetes.io/projected/6c5341a2-e3e8-44ed-94fa-24271e445650-kube-api-access-lsqln\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.974726 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c5341a2-e3e8-44ed-94fa-24271e445650-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.975047 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/9a4032f9-0bbb-4491-9f59-8b6006133dd6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.981612 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a4032f9-0bbb-4491-9f59-8b6006133dd6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.982472 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a4032f9-0bbb-4491-9f59-8b6006133dd6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.982892 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a4032f9-0bbb-4491-9f59-8b6006133dd6-scripts\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.985767 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a4032f9-0bbb-4491-9f59-8b6006133dd6-config-data\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:33 crc kubenswrapper[4625]: I1202 14:06:33.997657 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n854\" (UniqueName: \"kubernetes.io/projected/9a4032f9-0bbb-4491-9f59-8b6006133dd6-kube-api-access-2n854\") pod \"cinder-scheduler-0\" (UID: \"9a4032f9-0bbb-4491-9f59-8b6006133dd6\") " pod="openstack/cinder-scheduler-0" Dec 02 14:06:34 crc kubenswrapper[4625]: I1202 14:06:34.105660 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 02 14:06:34 crc kubenswrapper[4625]: I1202 14:06:34.665651 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3d2c435c-5496-4ec7-ac3f-eab4e5728204","Type":"ContainerStarted","Data":"a92fe6687293640db0830b81732ea6b0a08335fffc0f6b80deca5131bec0e2ec"} Dec 02 14:06:34 crc kubenswrapper[4625]: I1202 14:06:34.666397 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 02 14:06:34 crc kubenswrapper[4625]: I1202 14:06:34.673735 4625 generic.go:334] "Generic (PLEG): container finished" podID="abbd3215-4ced-473b-84a7-1f859e2782b2" containerID="c0b85fc85b6af33a966bd899d504149093149ae35bad48a55cf819efdccadc6f" exitCode=0 Dec 02 14:06:34 crc kubenswrapper[4625]: I1202 14:06:34.673821 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 02 14:06:34 crc kubenswrapper[4625]: I1202 14:06:34.673874 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abbd3215-4ced-473b-84a7-1f859e2782b2","Type":"ContainerDied","Data":"c0b85fc85b6af33a966bd899d504149093149ae35bad48a55cf819efdccadc6f"} Dec 02 14:06:34 crc kubenswrapper[4625]: I1202 14:06:34.706192 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=15.70616642 podStartE2EDuration="15.70616642s" podCreationTimestamp="2025-12-02 14:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:06:34.706103998 +0000 UTC m=+1350.668281073" watchObservedRunningTime="2025-12-02 14:06:34.70616642 +0000 UTC m=+1350.668343495" Dec 02 14:06:34 crc kubenswrapper[4625]: I1202 14:06:34.717235 4625 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6c5341a2-e3e8-44ed-94fa-24271e445650" podUID="7e342617-f071-4967-a02d-38534c2c7c11" Dec 02 14:06:34 crc kubenswrapper[4625]: I1202 14:06:34.773606 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 02 14:06:34 crc kubenswrapper[4625]: W1202 14:06:34.807534 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a4032f9_0bbb_4491_9f59_8b6006133dd6.slice/crio-87fd38efc5d8efd1b7ad1e13b0f3c39c55b7933071a18431256b85d4dce41328 WatchSource:0}: Error finding container 87fd38efc5d8efd1b7ad1e13b0f3c39c55b7933071a18431256b85d4dce41328: Status 404 returned error can't find the container with id 87fd38efc5d8efd1b7ad1e13b0f3c39c55b7933071a18431256b85d4dce41328 Dec 02 14:06:34 crc kubenswrapper[4625]: I1202 14:06:34.917996 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f730bcc-709b-4d75-9d55-579a852a5855" path="/var/lib/kubelet/pods/3f730bcc-709b-4d75-9d55-579a852a5855/volumes" Dec 02 14:06:34 crc kubenswrapper[4625]: I1202 14:06:34.949846 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6305c486-3db4-4bea-8d41-555d94ea0e5d" path="/var/lib/kubelet/pods/6305c486-3db4-4bea-8d41-555d94ea0e5d/volumes" Dec 02 14:06:34 crc kubenswrapper[4625]: I1202 14:06:34.957350 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c5341a2-e3e8-44ed-94fa-24271e445650" path="/var/lib/kubelet/pods/6c5341a2-e3e8-44ed-94fa-24271e445650/volumes" Dec 02 14:06:34 crc kubenswrapper[4625]: I1202 14:06:34.991744 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.106983 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgpgh\" (UniqueName: \"kubernetes.io/projected/abbd3215-4ced-473b-84a7-1f859e2782b2-kube-api-access-rgpgh\") pod \"abbd3215-4ced-473b-84a7-1f859e2782b2\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.107080 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd3215-4ced-473b-84a7-1f859e2782b2-log-httpd\") pod \"abbd3215-4ced-473b-84a7-1f859e2782b2\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.107273 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd3215-4ced-473b-84a7-1f859e2782b2-run-httpd\") pod \"abbd3215-4ced-473b-84a7-1f859e2782b2\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.107305 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-combined-ca-bundle\") pod \"abbd3215-4ced-473b-84a7-1f859e2782b2\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.107441 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-config-data\") pod \"abbd3215-4ced-473b-84a7-1f859e2782b2\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.107491 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-sg-core-conf-yaml\") pod \"abbd3215-4ced-473b-84a7-1f859e2782b2\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.107870 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-scripts\") pod \"abbd3215-4ced-473b-84a7-1f859e2782b2\" (UID: \"abbd3215-4ced-473b-84a7-1f859e2782b2\") " Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.108257 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abbd3215-4ced-473b-84a7-1f859e2782b2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "abbd3215-4ced-473b-84a7-1f859e2782b2" (UID: "abbd3215-4ced-473b-84a7-1f859e2782b2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.109280 4625 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd3215-4ced-473b-84a7-1f859e2782b2-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.109781 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abbd3215-4ced-473b-84a7-1f859e2782b2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "abbd3215-4ced-473b-84a7-1f859e2782b2" (UID: "abbd3215-4ced-473b-84a7-1f859e2782b2"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.115485 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "abbd3215-4ced-473b-84a7-1f859e2782b2" (UID: "abbd3215-4ced-473b-84a7-1f859e2782b2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.116127 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-scripts" (OuterVolumeSpecName: "scripts") pod "abbd3215-4ced-473b-84a7-1f859e2782b2" (UID: "abbd3215-4ced-473b-84a7-1f859e2782b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.119931 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abbd3215-4ced-473b-84a7-1f859e2782b2-kube-api-access-rgpgh" (OuterVolumeSpecName: "kube-api-access-rgpgh") pod "abbd3215-4ced-473b-84a7-1f859e2782b2" (UID: "abbd3215-4ced-473b-84a7-1f859e2782b2"). InnerVolumeSpecName "kube-api-access-rgpgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.155597 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-config-data" (OuterVolumeSpecName: "config-data") pod "abbd3215-4ced-473b-84a7-1f859e2782b2" (UID: "abbd3215-4ced-473b-84a7-1f859e2782b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.183621 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abbd3215-4ced-473b-84a7-1f859e2782b2" (UID: "abbd3215-4ced-473b-84a7-1f859e2782b2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.211938 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgpgh\" (UniqueName: \"kubernetes.io/projected/abbd3215-4ced-473b-84a7-1f859e2782b2-kube-api-access-rgpgh\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.212327 4625 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd3215-4ced-473b-84a7-1f859e2782b2-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.212455 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.212543 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.212602 4625 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.212658 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbd3215-4ced-473b-84a7-1f859e2782b2-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.241842 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6fb4775b59-xb9rg"] Dec 02 14:06:35 crc kubenswrapper[4625]: E1202 14:06:35.242411 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abbd3215-4ced-473b-84a7-1f859e2782b2" containerName="ceilometer-notification-agent" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.242430 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="abbd3215-4ced-473b-84a7-1f859e2782b2" containerName="ceilometer-notification-agent" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.242700 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="abbd3215-4ced-473b-84a7-1f859e2782b2" containerName="ceilometer-notification-agent" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.244015 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.251884 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.252170 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.252377 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.302420 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6fb4775b59-xb9rg"] Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.416179 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afdaf455-8ee8-42c2-8086-305834a075a5-config-data\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.416303 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afdaf455-8ee8-42c2-8086-305834a075a5-public-tls-certs\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.416483 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afdaf455-8ee8-42c2-8086-305834a075a5-log-httpd\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.416593 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/afdaf455-8ee8-42c2-8086-305834a075a5-etc-swift\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.416719 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnfrb\" (UniqueName: \"kubernetes.io/projected/afdaf455-8ee8-42c2-8086-305834a075a5-kube-api-access-hnfrb\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.416818 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afdaf455-8ee8-42c2-8086-305834a075a5-run-httpd\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.416885 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afdaf455-8ee8-42c2-8086-305834a075a5-internal-tls-certs\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " 
pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.416923 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afdaf455-8ee8-42c2-8086-305834a075a5-combined-ca-bundle\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.519736 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afdaf455-8ee8-42c2-8086-305834a075a5-internal-tls-certs\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.519816 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afdaf455-8ee8-42c2-8086-305834a075a5-combined-ca-bundle\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.519887 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afdaf455-8ee8-42c2-8086-305834a075a5-config-data\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.519925 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afdaf455-8ee8-42c2-8086-305834a075a5-public-tls-certs\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.519956 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afdaf455-8ee8-42c2-8086-305834a075a5-log-httpd\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.519985 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/afdaf455-8ee8-42c2-8086-305834a075a5-etc-swift\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.520041 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnfrb\" (UniqueName: \"kubernetes.io/projected/afdaf455-8ee8-42c2-8086-305834a075a5-kube-api-access-hnfrb\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.520081 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afdaf455-8ee8-42c2-8086-305834a075a5-run-httpd\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " 
pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.522883 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afdaf455-8ee8-42c2-8086-305834a075a5-run-httpd\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.523119 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afdaf455-8ee8-42c2-8086-305834a075a5-log-httpd\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.540666 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afdaf455-8ee8-42c2-8086-305834a075a5-combined-ca-bundle\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.548291 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/afdaf455-8ee8-42c2-8086-305834a075a5-etc-swift\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.551033 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnfrb\" (UniqueName: \"kubernetes.io/projected/afdaf455-8ee8-42c2-8086-305834a075a5-kube-api-access-hnfrb\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.551861 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afdaf455-8ee8-42c2-8086-305834a075a5-config-data\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.554435 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afdaf455-8ee8-42c2-8086-305834a075a5-public-tls-certs\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.555573 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afdaf455-8ee8-42c2-8086-305834a075a5-internal-tls-certs\") pod \"swift-proxy-6fb4775b59-xb9rg\" (UID: \"afdaf455-8ee8-42c2-8086-305834a075a5\") " pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.597262 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.700641 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abbd3215-4ced-473b-84a7-1f859e2782b2","Type":"ContainerDied","Data":"a297e061e447bbbd0c480fe8db00e9219037278e3ff0f383268234c6f71bb7d3"} Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.700733 4625 scope.go:117] "RemoveContainer" containerID="c0b85fc85b6af33a966bd899d504149093149ae35bad48a55cf819efdccadc6f" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.700969 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.729724 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9a4032f9-0bbb-4491-9f59-8b6006133dd6","Type":"ContainerStarted","Data":"87fd38efc5d8efd1b7ad1e13b0f3c39c55b7933071a18431256b85d4dce41328"} Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.796374 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.837195 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.854668 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.857205 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.870426 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.875436 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.909472 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.940710 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksnzm\" (UniqueName: \"kubernetes.io/projected/ef5cc506-cd7a-493f-a749-c39160dfe5bd-kube-api-access-ksnzm\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.940761 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef5cc506-cd7a-493f-a749-c39160dfe5bd-log-httpd\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.940807 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.940910 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef5cc506-cd7a-493f-a749-c39160dfe5bd-run-httpd\") pod \"ceilometer-0\" (UID: 
\"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.941170 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.941214 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-config-data\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:35 crc kubenswrapper[4625]: I1202 14:06:35.943360 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-scripts\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.045001 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.045692 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-config-data\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.045973 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-scripts\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.046058 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksnzm\" (UniqueName: \"kubernetes.io/projected/ef5cc506-cd7a-493f-a749-c39160dfe5bd-kube-api-access-ksnzm\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.046092 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef5cc506-cd7a-493f-a749-c39160dfe5bd-log-httpd\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.046186 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.046287 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef5cc506-cd7a-493f-a749-c39160dfe5bd-run-httpd\") 
pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.046949 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef5cc506-cd7a-493f-a749-c39160dfe5bd-log-httpd\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.047422 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef5cc506-cd7a-493f-a749-c39160dfe5bd-run-httpd\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.055816 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.055817 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-scripts\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.060551 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-config-data\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.073667 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.076750 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksnzm\" (UniqueName: \"kubernetes.io/projected/ef5cc506-cd7a-493f-a749-c39160dfe5bd-kube-api-access-ksnzm\") pod \"ceilometer-0\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " pod="openstack/ceilometer-0" Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.206135 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.487226 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6fb4775b59-xb9rg"] Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.751876 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6fb4775b59-xb9rg" event={"ID":"afdaf455-8ee8-42c2-8086-305834a075a5","Type":"ContainerStarted","Data":"fee3f8a036c9909ad3eaf9511a312c0929116baf531c32dbe2b2a3ba16d03b42"} Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.787091 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9a4032f9-0bbb-4491-9f59-8b6006133dd6","Type":"ContainerStarted","Data":"dc2fe5529ed2bdab32b3a680551ade690aa03dc64a7664e38a2f0dadecbd3e2c"} Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.799180 4625 generic.go:334] "Generic (PLEG): container finished" podID="92339196-3d33-4b76-9ba2-81e1a8373e84" containerID="ecf1871be89bb7259b3396f1b0d15bf2940dc1ca653cceb4173acdb58bbada5d" exitCode=137 Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.799262 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dc4db5bfb-zbs4l" event={"ID":"92339196-3d33-4b76-9ba2-81e1a8373e84","Type":"ContainerDied","Data":"ecf1871be89bb7259b3396f1b0d15bf2940dc1ca653cceb4173acdb58bbada5d"} Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.809882 4625 generic.go:334] "Generic (PLEG): container finished" podID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerID="f7d7aff050b1cd68f760459d9ee8066bf44a2756b77213a691265525e661240d" exitCode=137 Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.809946 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c94877878-jvhxv" event={"ID":"04b6d9a8-9eed-441e-a627-83774df65ed9","Type":"ContainerDied","Data":"f7d7aff050b1cd68f760459d9ee8066bf44a2756b77213a691265525e661240d"} Dec 02 14:06:36 crc kubenswrapper[4625]: I1202 14:06:36.893909 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abbd3215-4ced-473b-84a7-1f859e2782b2" path="/var/lib/kubelet/pods/abbd3215-4ced-473b-84a7-1f859e2782b2/volumes" Dec 02 14:06:37 crc kubenswrapper[4625]: I1202 14:06:37.028794 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:06:37 crc kubenswrapper[4625]: I1202 14:06:37.823729 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9a4032f9-0bbb-4491-9f59-8b6006133dd6","Type":"ContainerStarted","Data":"49c9d0a972128739d6a6631f1575d77fb1f7d75558b811ee2e36db88ee5fea56"} Dec 02 14:06:37 crc kubenswrapper[4625]: I1202 14:06:37.828828 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef5cc506-cd7a-493f-a749-c39160dfe5bd","Type":"ContainerStarted","Data":"c89bc10352aba387442483c0b537429b7030255f0324f0c0b63fd7bc19fb7f97"} Dec 02 14:06:37 crc kubenswrapper[4625]: I1202 14:06:37.840184 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dc4db5bfb-zbs4l" event={"ID":"92339196-3d33-4b76-9ba2-81e1a8373e84","Type":"ContainerStarted","Data":"d39eb58cbd457e3197c1b069033007f6412dcad0cabe4997716a90bbba3af4b6"} Dec 02 14:06:37 crc kubenswrapper[4625]: I1202 14:06:37.865203 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.865177349 podStartE2EDuration="4.865177349s" podCreationTimestamp="2025-12-02 
14:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:06:37.847864743 +0000 UTC m=+1353.810041818" watchObservedRunningTime="2025-12-02 14:06:37.865177349 +0000 UTC m=+1353.827354424" Dec 02 14:06:37 crc kubenswrapper[4625]: I1202 14:06:37.867370 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c94877878-jvhxv" event={"ID":"04b6d9a8-9eed-441e-a627-83774df65ed9","Type":"ContainerStarted","Data":"0061a10534f7ca1b235f7c54381a27b61204ac5be16f48128c9ee3a3c6b5ee47"} Dec 02 14:06:37 crc kubenswrapper[4625]: I1202 14:06:37.874839 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6fb4775b59-xb9rg" event={"ID":"afdaf455-8ee8-42c2-8086-305834a075a5","Type":"ContainerStarted","Data":"3f99cbd77cdc9b376711fa637d7995b494ae4e186a61bac5461ba5290dac2a92"} Dec 02 14:06:37 crc kubenswrapper[4625]: I1202 14:06:37.874901 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6fb4775b59-xb9rg" event={"ID":"afdaf455-8ee8-42c2-8086-305834a075a5","Type":"ContainerStarted","Data":"60cebd663e73d0612e64ea75000ba4afc5b0afbc4930c36b4d64941502ece981"} Dec 02 14:06:37 crc kubenswrapper[4625]: I1202 14:06:37.875110 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:37 crc kubenswrapper[4625]: I1202 14:06:37.875131 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:37 crc kubenswrapper[4625]: I1202 14:06:37.991426 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6fb4775b59-xb9rg" podStartSLOduration=2.9913985370000002 podStartE2EDuration="2.991398537s" podCreationTimestamp="2025-12-02 14:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:06:37.913697159 +0000 UTC m=+1353.875874234" watchObservedRunningTime="2025-12-02 14:06:37.991398537 +0000 UTC m=+1353.953575602" Dec 02 14:06:38 crc kubenswrapper[4625]: I1202 14:06:38.914540 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef5cc506-cd7a-493f-a749-c39160dfe5bd","Type":"ContainerStarted","Data":"dd4ad26cf876909ee190fd42b43eb5f98091246439ef5bdb949204d88965cb80"} Dec 02 14:06:39 crc kubenswrapper[4625]: I1202 14:06:39.107091 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 02 14:06:40 crc kubenswrapper[4625]: I1202 14:06:40.011288 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef5cc506-cd7a-493f-a749-c39160dfe5bd","Type":"ContainerStarted","Data":"c3f6070b551ac1178afe089b224b75519f00bd2d6100a0ac7681cd48723ced8c"} Dec 02 14:06:41 crc kubenswrapper[4625]: I1202 14:06:41.024147 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef5cc506-cd7a-493f-a749-c39160dfe5bd","Type":"ContainerStarted","Data":"c6039b05e41ef7d4ae61f2fb41e8fe676f5a65f8a9d572642295581b230194b1"} Dec 02 14:06:41 crc kubenswrapper[4625]: I1202 14:06:41.251789 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:06:42 crc kubenswrapper[4625]: I1202 14:06:42.901178 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-jgkc9"] Dec 02 14:06:42 
crc kubenswrapper[4625]: I1202 14:06:42.903185 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jgkc9" Dec 02 14:06:42 crc kubenswrapper[4625]: I1202 14:06:42.942050 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jgkc9"] Dec 02 14:06:42 crc kubenswrapper[4625]: I1202 14:06:42.986489 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt8mm\" (UniqueName: \"kubernetes.io/projected/ce7acbd1-38a2-4fae-8c70-8d562c580274-kube-api-access-xt8mm\") pod \"nova-api-db-create-jgkc9\" (UID: \"ce7acbd1-38a2-4fae-8c70-8d562c580274\") " pod="openstack/nova-api-db-create-jgkc9" Dec 02 14:06:42 crc kubenswrapper[4625]: I1202 14:06:42.986586 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce7acbd1-38a2-4fae-8c70-8d562c580274-operator-scripts\") pod \"nova-api-db-create-jgkc9\" (UID: \"ce7acbd1-38a2-4fae-8c70-8d562c580274\") " pod="openstack/nova-api-db-create-jgkc9" Dec 02 14:06:42 crc kubenswrapper[4625]: I1202 14:06:42.993535 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-lf6rg"] Dec 02 14:06:42 crc kubenswrapper[4625]: I1202 14:06:42.995194 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lf6rg" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.006876 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lf6rg"] Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.092831 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt8mm\" (UniqueName: \"kubernetes.io/projected/ce7acbd1-38a2-4fae-8c70-8d562c580274-kube-api-access-xt8mm\") pod \"nova-api-db-create-jgkc9\" (UID: \"ce7acbd1-38a2-4fae-8c70-8d562c580274\") " pod="openstack/nova-api-db-create-jgkc9" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.092923 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce7acbd1-38a2-4fae-8c70-8d562c580274-operator-scripts\") pod \"nova-api-db-create-jgkc9\" (UID: \"ce7acbd1-38a2-4fae-8c70-8d562c580274\") " pod="openstack/nova-api-db-create-jgkc9" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.093055 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83f4615e-8b8d-420d-ad49-9eaf51763d66-operator-scripts\") pod \"nova-cell0-db-create-lf6rg\" (UID: \"83f4615e-8b8d-420d-ad49-9eaf51763d66\") " pod="openstack/nova-cell0-db-create-lf6rg" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.093082 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv557\" (UniqueName: \"kubernetes.io/projected/83f4615e-8b8d-420d-ad49-9eaf51763d66-kube-api-access-lv557\") pod \"nova-cell0-db-create-lf6rg\" (UID: \"83f4615e-8b8d-420d-ad49-9eaf51763d66\") " pod="openstack/nova-cell0-db-create-lf6rg" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.096747 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce7acbd1-38a2-4fae-8c70-8d562c580274-operator-scripts\") pod \"nova-api-db-create-jgkc9\" (UID: 
\"ce7acbd1-38a2-4fae-8c70-8d562c580274\") " pod="openstack/nova-api-db-create-jgkc9" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.120395 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.120351 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="ceilometer-central-agent" containerID="cri-o://dd4ad26cf876909ee190fd42b43eb5f98091246439ef5bdb949204d88965cb80" gracePeriod=30 Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.120570 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="proxy-httpd" containerID="cri-o://20253212cab0b0a5c7a4868bbb30104896188f6183cdc98db39f60492180d611" gracePeriod=30 Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.120639 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="sg-core" containerID="cri-o://c6039b05e41ef7d4ae61f2fb41e8fe676f5a65f8a9d572642295581b230194b1" gracePeriod=30 Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.120701 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="ceilometer-notification-agent" containerID="cri-o://c3f6070b551ac1178afe089b224b75519f00bd2d6100a0ac7681cd48723ced8c" gracePeriod=30 Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.135013 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-bkp5f"] Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.136758 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bkp5f" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.197195 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58qbx\" (UniqueName: \"kubernetes.io/projected/c3a6db25-9cc7-4109-9aac-0b1b13e9082d-kube-api-access-58qbx\") pod \"nova-cell1-db-create-bkp5f\" (UID: \"c3a6db25-9cc7-4109-9aac-0b1b13e9082d\") " pod="openstack/nova-cell1-db-create-bkp5f" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.197270 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3a6db25-9cc7-4109-9aac-0b1b13e9082d-operator-scripts\") pod \"nova-cell1-db-create-bkp5f\" (UID: \"c3a6db25-9cc7-4109-9aac-0b1b13e9082d\") " pod="openstack/nova-cell1-db-create-bkp5f" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.197417 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83f4615e-8b8d-420d-ad49-9eaf51763d66-operator-scripts\") pod \"nova-cell0-db-create-lf6rg\" (UID: \"83f4615e-8b8d-420d-ad49-9eaf51763d66\") " pod="openstack/nova-cell0-db-create-lf6rg" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.197453 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv557\" (UniqueName: \"kubernetes.io/projected/83f4615e-8b8d-420d-ad49-9eaf51763d66-kube-api-access-lv557\") pod \"nova-cell0-db-create-lf6rg\" (UID: \"83f4615e-8b8d-420d-ad49-9eaf51763d66\") " pod="openstack/nova-cell0-db-create-lf6rg" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.210814 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt8mm\" (UniqueName: \"kubernetes.io/projected/ce7acbd1-38a2-4fae-8c70-8d562c580274-kube-api-access-xt8mm\") pod \"nova-api-db-create-jgkc9\" (UID: \"ce7acbd1-38a2-4fae-8c70-8d562c580274\") " pod="openstack/nova-api-db-create-jgkc9" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.213577 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83f4615e-8b8d-420d-ad49-9eaf51763d66-operator-scripts\") pod \"nova-cell0-db-create-lf6rg\" (UID: \"83f4615e-8b8d-420d-ad49-9eaf51763d66\") " pod="openstack/nova-cell0-db-create-lf6rg" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.238033 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jgkc9" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.242469 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2308-account-create-update-d8cph"] Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.244141 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2308-account-create-update-d8cph" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.258961 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.274807 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bkp5f"] Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.287458 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.923885611 podStartE2EDuration="8.287433888s" podCreationTimestamp="2025-12-02 14:06:35 +0000 UTC" firstStartedPulling="2025-12-02 14:06:37.055533842 +0000 UTC m=+1353.017710917" lastFinishedPulling="2025-12-02 14:06:41.419082119 +0000 UTC m=+1357.381259194" observedRunningTime="2025-12-02 14:06:43.192259291 +0000 UTC m=+1359.154436376" watchObservedRunningTime="2025-12-02 14:06:43.287433888 +0000 UTC m=+1359.249610963" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.288090 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2308-account-create-update-d8cph"] Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.304572 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckl6s\" (UniqueName: \"kubernetes.io/projected/64b5aba9-30dc-4ef5-a103-c0a1a264abb1-kube-api-access-ckl6s\") pod \"nova-api-2308-account-create-update-d8cph\" (UID: \"64b5aba9-30dc-4ef5-a103-c0a1a264abb1\") " pod="openstack/nova-api-2308-account-create-update-d8cph" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.304626 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58qbx\" (UniqueName: \"kubernetes.io/projected/c3a6db25-9cc7-4109-9aac-0b1b13e9082d-kube-api-access-58qbx\") pod \"nova-cell1-db-create-bkp5f\" (UID: \"c3a6db25-9cc7-4109-9aac-0b1b13e9082d\") " pod="openstack/nova-cell1-db-create-bkp5f" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.304661 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b5aba9-30dc-4ef5-a103-c0a1a264abb1-operator-scripts\") pod \"nova-api-2308-account-create-update-d8cph\" (UID: \"64b5aba9-30dc-4ef5-a103-c0a1a264abb1\") " pod="openstack/nova-api-2308-account-create-update-d8cph" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.304691 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3a6db25-9cc7-4109-9aac-0b1b13e9082d-operator-scripts\") pod \"nova-cell1-db-create-bkp5f\" (UID: \"c3a6db25-9cc7-4109-9aac-0b1b13e9082d\") " pod="openstack/nova-cell1-db-create-bkp5f" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.307462 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3a6db25-9cc7-4109-9aac-0b1b13e9082d-operator-scripts\") pod \"nova-cell1-db-create-bkp5f\" (UID: \"c3a6db25-9cc7-4109-9aac-0b1b13e9082d\") " pod="openstack/nova-cell1-db-create-bkp5f" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.312926 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv557\" (UniqueName: \"kubernetes.io/projected/83f4615e-8b8d-420d-ad49-9eaf51763d66-kube-api-access-lv557\") pod \"nova-cell0-db-create-lf6rg\" (UID: 
\"83f4615e-8b8d-420d-ad49-9eaf51763d66\") " pod="openstack/nova-cell0-db-create-lf6rg" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.332390 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58qbx\" (UniqueName: \"kubernetes.io/projected/c3a6db25-9cc7-4109-9aac-0b1b13e9082d-kube-api-access-58qbx\") pod \"nova-cell1-db-create-bkp5f\" (UID: \"c3a6db25-9cc7-4109-9aac-0b1b13e9082d\") " pod="openstack/nova-cell1-db-create-bkp5f" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.380623 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-12a0-account-create-update-2jk9s"] Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.384408 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-12a0-account-create-update-2jk9s" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.387912 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lf6rg" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.393885 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.407751 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckl6s\" (UniqueName: \"kubernetes.io/projected/64b5aba9-30dc-4ef5-a103-c0a1a264abb1-kube-api-access-ckl6s\") pod \"nova-api-2308-account-create-update-d8cph\" (UID: \"64b5aba9-30dc-4ef5-a103-c0a1a264abb1\") " pod="openstack/nova-api-2308-account-create-update-d8cph" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.407843 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b5aba9-30dc-4ef5-a103-c0a1a264abb1-operator-scripts\") pod \"nova-api-2308-account-create-update-d8cph\" (UID: \"64b5aba9-30dc-4ef5-a103-c0a1a264abb1\") " pod="openstack/nova-api-2308-account-create-update-d8cph" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.407959 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2fq5\" (UniqueName: \"kubernetes.io/projected/d3af4557-685c-47a8-b194-2eaa04edad39-kube-api-access-t2fq5\") pod \"nova-cell0-12a0-account-create-update-2jk9s\" (UID: \"d3af4557-685c-47a8-b194-2eaa04edad39\") " pod="openstack/nova-cell0-12a0-account-create-update-2jk9s" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.408036 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3af4557-685c-47a8-b194-2eaa04edad39-operator-scripts\") pod \"nova-cell0-12a0-account-create-update-2jk9s\" (UID: \"d3af4557-685c-47a8-b194-2eaa04edad39\") " pod="openstack/nova-cell0-12a0-account-create-update-2jk9s" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.423829 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b5aba9-30dc-4ef5-a103-c0a1a264abb1-operator-scripts\") pod \"nova-api-2308-account-create-update-d8cph\" (UID: \"64b5aba9-30dc-4ef5-a103-c0a1a264abb1\") " pod="openstack/nova-api-2308-account-create-update-d8cph" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.490582 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckl6s\" (UniqueName: 
\"kubernetes.io/projected/64b5aba9-30dc-4ef5-a103-c0a1a264abb1-kube-api-access-ckl6s\") pod \"nova-api-2308-account-create-update-d8cph\" (UID: \"64b5aba9-30dc-4ef5-a103-c0a1a264abb1\") " pod="openstack/nova-api-2308-account-create-update-d8cph" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.503114 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-12a0-account-create-update-2jk9s"] Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.510458 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3af4557-685c-47a8-b194-2eaa04edad39-operator-scripts\") pod \"nova-cell0-12a0-account-create-update-2jk9s\" (UID: \"d3af4557-685c-47a8-b194-2eaa04edad39\") " pod="openstack/nova-cell0-12a0-account-create-update-2jk9s" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.510639 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2fq5\" (UniqueName: \"kubernetes.io/projected/d3af4557-685c-47a8-b194-2eaa04edad39-kube-api-access-t2fq5\") pod \"nova-cell0-12a0-account-create-update-2jk9s\" (UID: \"d3af4557-685c-47a8-b194-2eaa04edad39\") " pod="openstack/nova-cell0-12a0-account-create-update-2jk9s" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.523993 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3af4557-685c-47a8-b194-2eaa04edad39-operator-scripts\") pod \"nova-cell0-12a0-account-create-update-2jk9s\" (UID: \"d3af4557-685c-47a8-b194-2eaa04edad39\") " pod="openstack/nova-cell0-12a0-account-create-update-2jk9s" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.566862 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2fq5\" (UniqueName: \"kubernetes.io/projected/d3af4557-685c-47a8-b194-2eaa04edad39-kube-api-access-t2fq5\") pod \"nova-cell0-12a0-account-create-update-2jk9s\" (UID: \"d3af4557-685c-47a8-b194-2eaa04edad39\") " pod="openstack/nova-cell0-12a0-account-create-update-2jk9s" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.590632 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bkp5f" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.602173 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2308-account-create-update-d8cph" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.637208 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6cf2-account-create-update-jgt9w"] Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.639147 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6cf2-account-create-update-jgt9w" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.652551 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.654658 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6cf2-account-create-update-jgt9w"] Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.738640 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-12a0-account-create-update-2jk9s" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.851748 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fec66282-38cc-4eb2-ac47-c1a5a7b377f2-operator-scripts\") pod \"nova-cell1-6cf2-account-create-update-jgt9w\" (UID: \"fec66282-38cc-4eb2-ac47-c1a5a7b377f2\") " pod="openstack/nova-cell1-6cf2-account-create-update-jgt9w" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.851822 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2rmb\" (UniqueName: \"kubernetes.io/projected/fec66282-38cc-4eb2-ac47-c1a5a7b377f2-kube-api-access-w2rmb\") pod \"nova-cell1-6cf2-account-create-update-jgt9w\" (UID: \"fec66282-38cc-4eb2-ac47-c1a5a7b377f2\") " pod="openstack/nova-cell1-6cf2-account-create-update-jgt9w" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.954793 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fec66282-38cc-4eb2-ac47-c1a5a7b377f2-operator-scripts\") pod \"nova-cell1-6cf2-account-create-update-jgt9w\" (UID: \"fec66282-38cc-4eb2-ac47-c1a5a7b377f2\") " pod="openstack/nova-cell1-6cf2-account-create-update-jgt9w" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.954862 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2rmb\" (UniqueName: \"kubernetes.io/projected/fec66282-38cc-4eb2-ac47-c1a5a7b377f2-kube-api-access-w2rmb\") pod \"nova-cell1-6cf2-account-create-update-jgt9w\" (UID: \"fec66282-38cc-4eb2-ac47-c1a5a7b377f2\") " pod="openstack/nova-cell1-6cf2-account-create-update-jgt9w" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.955893 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fec66282-38cc-4eb2-ac47-c1a5a7b377f2-operator-scripts\") pod \"nova-cell1-6cf2-account-create-update-jgt9w\" (UID: \"fec66282-38cc-4eb2-ac47-c1a5a7b377f2\") " pod="openstack/nova-cell1-6cf2-account-create-update-jgt9w" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.975793 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2rmb\" (UniqueName: \"kubernetes.io/projected/fec66282-38cc-4eb2-ac47-c1a5a7b377f2-kube-api-access-w2rmb\") pod \"nova-cell1-6cf2-account-create-update-jgt9w\" (UID: \"fec66282-38cc-4eb2-ac47-c1a5a7b377f2\") " pod="openstack/nova-cell1-6cf2-account-create-update-jgt9w" Dec 02 14:06:43 crc kubenswrapper[4625]: I1202 14:06:43.996850 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6cf2-account-create-update-jgt9w" Dec 02 14:06:44 crc kubenswrapper[4625]: I1202 14:06:44.149088 4625 generic.go:334] "Generic (PLEG): container finished" podID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerID="c6039b05e41ef7d4ae61f2fb41e8fe676f5a65f8a9d572642295581b230194b1" exitCode=2 Dec 02 14:06:44 crc kubenswrapper[4625]: I1202 14:06:44.149165 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef5cc506-cd7a-493f-a749-c39160dfe5bd","Type":"ContainerStarted","Data":"20253212cab0b0a5c7a4868bbb30104896188f6183cdc98db39f60492180d611"} Dec 02 14:06:44 crc kubenswrapper[4625]: I1202 14:06:44.149208 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef5cc506-cd7a-493f-a749-c39160dfe5bd","Type":"ContainerDied","Data":"c6039b05e41ef7d4ae61f2fb41e8fe676f5a65f8a9d572642295581b230194b1"} Dec 02 14:06:44 crc kubenswrapper[4625]: I1202 14:06:44.657462 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 02 14:06:44 crc kubenswrapper[4625]: I1202 14:06:44.928848 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="3d2c435c-5496-4ec7-ac3f-eab4e5728204" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.163:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:45 crc kubenswrapper[4625]: I1202 14:06:45.162398 4625 generic.go:334] "Generic (PLEG): container finished" podID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerID="c3f6070b551ac1178afe089b224b75519f00bd2d6100a0ac7681cd48723ced8c" exitCode=0 Dec 02 14:06:45 crc kubenswrapper[4625]: I1202 14:06:45.162472 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef5cc506-cd7a-493f-a749-c39160dfe5bd","Type":"ContainerDied","Data":"c3f6070b551ac1178afe089b224b75519f00bd2d6100a0ac7681cd48723ced8c"} Dec 02 14:06:45 crc kubenswrapper[4625]: I1202 14:06:45.623119 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:45 crc kubenswrapper[4625]: I1202 14:06:45.644856 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6fb4775b59-xb9rg" Dec 02 14:06:45 crc kubenswrapper[4625]: I1202 14:06:45.898829 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="3d2c435c-5496-4ec7-ac3f-eab4e5728204" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.163:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 14:06:46 crc kubenswrapper[4625]: I1202 14:06:46.096265 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:06:46 crc kubenswrapper[4625]: I1202 14:06:46.096701 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:06:46 crc kubenswrapper[4625]: I1202 14:06:46.357047 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:06:46 crc kubenswrapper[4625]: I1202 14:06:46.359238 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:06:49 crc kubenswrapper[4625]: I1202 14:06:49.240978 4625 
generic.go:334] "Generic (PLEG): container finished" podID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerID="dd4ad26cf876909ee190fd42b43eb5f98091246439ef5bdb949204d88965cb80" exitCode=0 Dec 02 14:06:49 crc kubenswrapper[4625]: I1202 14:06:49.241067 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef5cc506-cd7a-493f-a749-c39160dfe5bd","Type":"ContainerDied","Data":"dd4ad26cf876909ee190fd42b43eb5f98091246439ef5bdb949204d88965cb80"} Dec 02 14:06:49 crc kubenswrapper[4625]: I1202 14:06:49.406441 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 02 14:06:53 crc kubenswrapper[4625]: E1202 14:06:53.988243 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Dec 02 14:06:53 crc kubenswrapper[4625]: E1202 14:06:53.989794 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc9h5cch5d7h586h66fhd8h597h569h678h554h674h5b4h66h55dh85h95hbch678h668h67h54h676h5c5h579h54ch75h65h5b5hfch5c5h665h8fq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmwhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(7e342617-f071-4967-a02d-38534c2c7c11): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:06:53 crc kubenswrapper[4625]: E1202 14:06:53.991268 4625 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="7e342617-f071-4967-a02d-38534c2c7c11"
Dec 02 14:06:54 crc kubenswrapper[4625]: E1202 14:06:54.353453 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="7e342617-f071-4967-a02d-38534c2c7c11"
Dec 02 14:06:54 crc kubenswrapper[4625]: I1202 14:06:54.959863 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bkp5f"]
Dec 02 14:06:54 crc kubenswrapper[4625]: W1202 14:06:54.969071 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3a6db25_9cc7_4109_9aac_0b1b13e9082d.slice/crio-cc30f96e82682444efa72982c75a263fce14ca27c1f6d49f0dbe05765597cb30 WatchSource:0}: Error finding container cc30f96e82682444efa72982c75a263fce14ca27c1f6d49f0dbe05765597cb30: Status 404 returned error can't find the container with id cc30f96e82682444efa72982c75a263fce14ca27c1f6d49f0dbe05765597cb30
Dec 02 14:06:55 crc kubenswrapper[4625]: I1202 14:06:55.030899 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jgkc9"]
Dec 02 14:06:55 crc kubenswrapper[4625]: I1202 14:06:55.088392 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2308-account-create-update-d8cph"]
Dec 02 14:06:55 crc kubenswrapper[4625]: I1202 14:06:55.297606 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lf6rg"]
Dec 02 14:06:55 crc kubenswrapper[4625]: I1202 14:06:55.390947 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lf6rg" event={"ID":"83f4615e-8b8d-420d-ad49-9eaf51763d66","Type":"ContainerStarted","Data":"f697d38763915bf202697fbcb33bdcb5042c2133a1a04459c90cdc0d4ef5efdb"}
Dec 02 14:06:55 crc kubenswrapper[4625]: I1202 14:06:55.393369 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bkp5f" event={"ID":"c3a6db25-9cc7-4109-9aac-0b1b13e9082d","Type":"ContainerStarted","Data":"cc30f96e82682444efa72982c75a263fce14ca27c1f6d49f0dbe05765597cb30"}
Dec 02 14:06:55 crc kubenswrapper[4625]: I1202 14:06:55.395174 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2308-account-create-update-d8cph" event={"ID":"64b5aba9-30dc-4ef5-a103-c0a1a264abb1","Type":"ContainerStarted","Data":"2ac67c4e146d697984f2a1dd97ceb67fc9d207ed9763b6f4e0eb07fe80deb101"}
Dec 02 14:06:55 crc kubenswrapper[4625]: I1202 14:06:55.396472 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jgkc9" event={"ID":"ce7acbd1-38a2-4fae-8c70-8d562c580274","Type":"ContainerStarted","Data":"b39850150def229fd393d30e81aafee1eca054a44b06cff9cba763d9e731559c"}
Dec 02 14:06:55 crc kubenswrapper[4625]: I1202 14:06:55.454350 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-12a0-account-create-update-2jk9s"]
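
The two pod_workers entries at 14:06:53-14:06:54 above show the kubelet's error ladder for a failed pull: the CRI pull fails (here with "context canceled"), the sync loop surfaces ErrImagePull once, and subsequent syncs report ImagePullBackOff while a retry timer backs off. A minimal sketch of that backoff shape, assuming the usual doubling-with-cap behavior; the 10s start and 5min ceiling are assumptions for the sketch, not values taken from this log. The pull eventually succeeds: the openstackclient container is reported ContainerStarted at 14:07:07 further below.

package main

import (
	"fmt"
	"time"
)

// Illustrative only, not kubelet source: models the ErrImagePull ->
// ImagePullBackOff retry ladder with exponential backoff and a ceiling.
func main() {
	delay := 10 * time.Second       // assumed initial backoff
	const ceiling = 5 * time.Minute // assumed cap
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("pull attempt %d failed (ErrImagePull); back off %v (ImagePullBackOff)\n", attempt, delay)
		delay *= 2
		if delay > ceiling {
			delay = ceiling
		}
	}
}
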
Dec 02 14:06:55 crc kubenswrapper[4625]: W1202 14:06:55.468990 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3af4557_685c_47a8_b194_2eaa04edad39.slice/crio-956a77ff7683c7e0dd89c55325508948a8a4d988fe0629b898cc93a982ac275b WatchSource:0}: Error finding container 956a77ff7683c7e0dd89c55325508948a8a4d988fe0629b898cc93a982ac275b: Status 404 returned error can't find the container with id 956a77ff7683c7e0dd89c55325508948a8a4d988fe0629b898cc93a982ac275b
Dec 02 14:06:55 crc kubenswrapper[4625]: I1202 14:06:55.505979 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6cf2-account-create-update-jgt9w"]
Dec 02 14:06:55 crc kubenswrapper[4625]: W1202 14:06:55.533464 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfec66282_38cc_4eb2_ac47_c1a5a7b377f2.slice/crio-3885bccd57a422b5aa7d88419a556b2e7441b54cea79dabcb2bd04ab37f17912 WatchSource:0}: Error finding container 3885bccd57a422b5aa7d88419a556b2e7441b54cea79dabcb2bd04ab37f17912: Status 404 returned error can't find the container with id 3885bccd57a422b5aa7d88419a556b2e7441b54cea79dabcb2bd04ab37f17912
Dec 02 14:06:56 crc kubenswrapper[4625]: I1202 14:06:56.101165 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c94877878-jvhxv" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused"
Dec 02 14:06:56 crc kubenswrapper[4625]: I1202 14:06:56.361088 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7dc4db5bfb-zbs4l" podUID="92339196-3d33-4b76-9ba2-81e1a8373e84" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused"
Dec 02 14:06:56 crc kubenswrapper[4625]: I1202 14:06:56.423668 4625 generic.go:334] "Generic (PLEG): container finished" podID="83f4615e-8b8d-420d-ad49-9eaf51763d66" containerID="69b9cbc4abd6d33cbdbf9c20ce48a060e1bcdb66192afa7b34d2e9a6bf55e83e" exitCode=0
Dec 02 14:06:56 crc kubenswrapper[4625]: I1202 14:06:56.423760 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lf6rg" event={"ID":"83f4615e-8b8d-420d-ad49-9eaf51763d66","Type":"ContainerDied","Data":"69b9cbc4abd6d33cbdbf9c20ce48a060e1bcdb66192afa7b34d2e9a6bf55e83e"}
Dec 02 14:06:56 crc kubenswrapper[4625]: I1202 14:06:56.431747 4625 generic.go:334] "Generic (PLEG): container finished" podID="fec66282-38cc-4eb2-ac47-c1a5a7b377f2" containerID="854726dfb2f31ff83e9ad58c6f787945c4c9eed41573830673d5b298b2a7b039" exitCode=0
Dec 02 14:06:56 crc kubenswrapper[4625]: I1202 14:06:56.431866 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6cf2-account-create-update-jgt9w" event={"ID":"fec66282-38cc-4eb2-ac47-c1a5a7b377f2","Type":"ContainerDied","Data":"854726dfb2f31ff83e9ad58c6f787945c4c9eed41573830673d5b298b2a7b039"}
Dec 02 14:06:56 crc kubenswrapper[4625]: I1202 14:06:56.431895 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6cf2-account-create-update-jgt9w" event={"ID":"fec66282-38cc-4eb2-ac47-c1a5a7b377f2","Type":"ContainerStarted","Data":"3885bccd57a422b5aa7d88419a556b2e7441b54cea79dabcb2bd04ab37f17912"}
Dec 02 14:06:56 crc kubenswrapper[4625]: I1202 14:06:56.440260 4625 generic.go:334] "Generic (PLEG): container finished" 
podID="c3a6db25-9cc7-4109-9aac-0b1b13e9082d" containerID="ce09e6ebbaa76dcacb9dc6d407a3c9edfee8d996dc5e0d51efee099aa69a8c0e" exitCode=0 Dec 02 14:06:56 crc kubenswrapper[4625]: I1202 14:06:56.440579 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bkp5f" event={"ID":"c3a6db25-9cc7-4109-9aac-0b1b13e9082d","Type":"ContainerDied","Data":"ce09e6ebbaa76dcacb9dc6d407a3c9edfee8d996dc5e0d51efee099aa69a8c0e"} Dec 02 14:06:56 crc kubenswrapper[4625]: I1202 14:06:56.455532 4625 generic.go:334] "Generic (PLEG): container finished" podID="64b5aba9-30dc-4ef5-a103-c0a1a264abb1" containerID="4a83870b658fe7ad2faf87d5486746ed4cc4de27850d3f72ea65f26575fbe82f" exitCode=0 Dec 02 14:06:56 crc kubenswrapper[4625]: I1202 14:06:56.455690 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2308-account-create-update-d8cph" event={"ID":"64b5aba9-30dc-4ef5-a103-c0a1a264abb1","Type":"ContainerDied","Data":"4a83870b658fe7ad2faf87d5486746ed4cc4de27850d3f72ea65f26575fbe82f"} Dec 02 14:06:56 crc kubenswrapper[4625]: I1202 14:06:56.462950 4625 generic.go:334] "Generic (PLEG): container finished" podID="ce7acbd1-38a2-4fae-8c70-8d562c580274" containerID="abf88742c2064c436a981cfe86478fa9b30e36d7cf552b1324e8f7e4e1bc163d" exitCode=0 Dec 02 14:06:56 crc kubenswrapper[4625]: I1202 14:06:56.463017 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jgkc9" event={"ID":"ce7acbd1-38a2-4fae-8c70-8d562c580274","Type":"ContainerDied","Data":"abf88742c2064c436a981cfe86478fa9b30e36d7cf552b1324e8f7e4e1bc163d"} Dec 02 14:06:56 crc kubenswrapper[4625]: I1202 14:06:56.473066 4625 generic.go:334] "Generic (PLEG): container finished" podID="d3af4557-685c-47a8-b194-2eaa04edad39" containerID="98bd38813d1817927ebb6af84015f6a4b5b9caa81bb8f6b56728bad74021d524" exitCode=0 Dec 02 14:06:56 crc kubenswrapper[4625]: I1202 14:06:56.473394 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-12a0-account-create-update-2jk9s" event={"ID":"d3af4557-685c-47a8-b194-2eaa04edad39","Type":"ContainerDied","Data":"98bd38813d1817927ebb6af84015f6a4b5b9caa81bb8f6b56728bad74021d524"} Dec 02 14:06:56 crc kubenswrapper[4625]: I1202 14:06:56.473523 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-12a0-account-create-update-2jk9s" event={"ID":"d3af4557-685c-47a8-b194-2eaa04edad39","Type":"ContainerStarted","Data":"956a77ff7683c7e0dd89c55325508948a8a4d988fe0629b898cc93a982ac275b"} Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.180815 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-lf6rg" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.334287 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83f4615e-8b8d-420d-ad49-9eaf51763d66-operator-scripts\") pod \"83f4615e-8b8d-420d-ad49-9eaf51763d66\" (UID: \"83f4615e-8b8d-420d-ad49-9eaf51763d66\") " Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.334429 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv557\" (UniqueName: \"kubernetes.io/projected/83f4615e-8b8d-420d-ad49-9eaf51763d66-kube-api-access-lv557\") pod \"83f4615e-8b8d-420d-ad49-9eaf51763d66\" (UID: \"83f4615e-8b8d-420d-ad49-9eaf51763d66\") " Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.338563 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83f4615e-8b8d-420d-ad49-9eaf51763d66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83f4615e-8b8d-420d-ad49-9eaf51763d66" (UID: "83f4615e-8b8d-420d-ad49-9eaf51763d66"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.342397 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83f4615e-8b8d-420d-ad49-9eaf51763d66-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.390246 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83f4615e-8b8d-420d-ad49-9eaf51763d66-kube-api-access-lv557" (OuterVolumeSpecName: "kube-api-access-lv557") pod "83f4615e-8b8d-420d-ad49-9eaf51763d66" (UID: "83f4615e-8b8d-420d-ad49-9eaf51763d66"). InnerVolumeSpecName "kube-api-access-lv557". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.449425 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv557\" (UniqueName: \"kubernetes.io/projected/83f4615e-8b8d-420d-ad49-9eaf51763d66-kube-api-access-lv557\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.527079 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-12a0-account-create-update-2jk9s" event={"ID":"d3af4557-685c-47a8-b194-2eaa04edad39","Type":"ContainerDied","Data":"956a77ff7683c7e0dd89c55325508948a8a4d988fe0629b898cc93a982ac275b"} Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.528239 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="956a77ff7683c7e0dd89c55325508948a8a4d988fe0629b898cc93a982ac275b" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.531167 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lf6rg" event={"ID":"83f4615e-8b8d-420d-ad49-9eaf51763d66","Type":"ContainerDied","Data":"f697d38763915bf202697fbcb33bdcb5042c2133a1a04459c90cdc0d4ef5efdb"} Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.531997 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f697d38763915bf202697fbcb33bdcb5042c2133a1a04459c90cdc0d4ef5efdb" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.532183 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-lf6rg" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.539402 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6cf2-account-create-update-jgt9w" event={"ID":"fec66282-38cc-4eb2-ac47-c1a5a7b377f2","Type":"ContainerDied","Data":"3885bccd57a422b5aa7d88419a556b2e7441b54cea79dabcb2bd04ab37f17912"} Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.539464 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3885bccd57a422b5aa7d88419a556b2e7441b54cea79dabcb2bd04ab37f17912" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.549145 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2308-account-create-update-d8cph" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.552357 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bkp5f" event={"ID":"c3a6db25-9cc7-4109-9aac-0b1b13e9082d","Type":"ContainerDied","Data":"cc30f96e82682444efa72982c75a263fce14ca27c1f6d49f0dbe05765597cb30"} Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.552419 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc30f96e82682444efa72982c75a263fce14ca27c1f6d49f0dbe05765597cb30" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.556716 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6cf2-account-create-update-jgt9w" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.558075 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2308-account-create-update-d8cph" event={"ID":"64b5aba9-30dc-4ef5-a103-c0a1a264abb1","Type":"ContainerDied","Data":"2ac67c4e146d697984f2a1dd97ceb67fc9d207ed9763b6f4e0eb07fe80deb101"} Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.558439 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ac67c4e146d697984f2a1dd97ceb67fc9d207ed9763b6f4e0eb07fe80deb101" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.559178 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2308-account-create-update-d8cph" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.597149 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jgkc9" event={"ID":"ce7acbd1-38a2-4fae-8c70-8d562c580274","Type":"ContainerDied","Data":"b39850150def229fd393d30e81aafee1eca054a44b06cff9cba763d9e731559c"} Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.597200 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b39850150def229fd393d30e81aafee1eca054a44b06cff9cba763d9e731559c" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.597354 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-12a0-account-create-update-2jk9s" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.628270 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bkp5f" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.631254 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jgkc9" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.655909 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckl6s\" (UniqueName: \"kubernetes.io/projected/64b5aba9-30dc-4ef5-a103-c0a1a264abb1-kube-api-access-ckl6s\") pod \"64b5aba9-30dc-4ef5-a103-c0a1a264abb1\" (UID: \"64b5aba9-30dc-4ef5-a103-c0a1a264abb1\") " Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.655998 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2rmb\" (UniqueName: \"kubernetes.io/projected/fec66282-38cc-4eb2-ac47-c1a5a7b377f2-kube-api-access-w2rmb\") pod \"fec66282-38cc-4eb2-ac47-c1a5a7b377f2\" (UID: \"fec66282-38cc-4eb2-ac47-c1a5a7b377f2\") " Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.656086 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3af4557-685c-47a8-b194-2eaa04edad39-operator-scripts\") pod \"d3af4557-685c-47a8-b194-2eaa04edad39\" (UID: \"d3af4557-685c-47a8-b194-2eaa04edad39\") " Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.656159 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fec66282-38cc-4eb2-ac47-c1a5a7b377f2-operator-scripts\") pod \"fec66282-38cc-4eb2-ac47-c1a5a7b377f2\" (UID: \"fec66282-38cc-4eb2-ac47-c1a5a7b377f2\") " Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.656186 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b5aba9-30dc-4ef5-a103-c0a1a264abb1-operator-scripts\") pod \"64b5aba9-30dc-4ef5-a103-c0a1a264abb1\" (UID: \"64b5aba9-30dc-4ef5-a103-c0a1a264abb1\") " Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.656351 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2fq5\" (UniqueName: \"kubernetes.io/projected/d3af4557-685c-47a8-b194-2eaa04edad39-kube-api-access-t2fq5\") pod \"d3af4557-685c-47a8-b194-2eaa04edad39\" (UID: \"d3af4557-685c-47a8-b194-2eaa04edad39\") " Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.656966 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3af4557-685c-47a8-b194-2eaa04edad39-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3af4557-685c-47a8-b194-2eaa04edad39" (UID: "d3af4557-685c-47a8-b194-2eaa04edad39"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.663775 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64b5aba9-30dc-4ef5-a103-c0a1a264abb1-kube-api-access-ckl6s" (OuterVolumeSpecName: "kube-api-access-ckl6s") pod "64b5aba9-30dc-4ef5-a103-c0a1a264abb1" (UID: "64b5aba9-30dc-4ef5-a103-c0a1a264abb1"). InnerVolumeSpecName "kube-api-access-ckl6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.670002 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fec66282-38cc-4eb2-ac47-c1a5a7b377f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fec66282-38cc-4eb2-ac47-c1a5a7b377f2" (UID: "fec66282-38cc-4eb2-ac47-c1a5a7b377f2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.679762 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64b5aba9-30dc-4ef5-a103-c0a1a264abb1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64b5aba9-30dc-4ef5-a103-c0a1a264abb1" (UID: "64b5aba9-30dc-4ef5-a103-c0a1a264abb1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.673026 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3af4557-685c-47a8-b194-2eaa04edad39-kube-api-access-t2fq5" (OuterVolumeSpecName: "kube-api-access-t2fq5") pod "d3af4557-685c-47a8-b194-2eaa04edad39" (UID: "d3af4557-685c-47a8-b194-2eaa04edad39"). InnerVolumeSpecName "kube-api-access-t2fq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.702184 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fec66282-38cc-4eb2-ac47-c1a5a7b377f2-kube-api-access-w2rmb" (OuterVolumeSpecName: "kube-api-access-w2rmb") pod "fec66282-38cc-4eb2-ac47-c1a5a7b377f2" (UID: "fec66282-38cc-4eb2-ac47-c1a5a7b377f2"). InnerVolumeSpecName "kube-api-access-w2rmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.759813 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3a6db25-9cc7-4109-9aac-0b1b13e9082d-operator-scripts\") pod \"c3a6db25-9cc7-4109-9aac-0b1b13e9082d\" (UID: \"c3a6db25-9cc7-4109-9aac-0b1b13e9082d\") " Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.760202 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt8mm\" (UniqueName: \"kubernetes.io/projected/ce7acbd1-38a2-4fae-8c70-8d562c580274-kube-api-access-xt8mm\") pod \"ce7acbd1-38a2-4fae-8c70-8d562c580274\" (UID: \"ce7acbd1-38a2-4fae-8c70-8d562c580274\") " Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.760245 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58qbx\" (UniqueName: \"kubernetes.io/projected/c3a6db25-9cc7-4109-9aac-0b1b13e9082d-kube-api-access-58qbx\") pod \"c3a6db25-9cc7-4109-9aac-0b1b13e9082d\" (UID: \"c3a6db25-9cc7-4109-9aac-0b1b13e9082d\") " Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.760349 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce7acbd1-38a2-4fae-8c70-8d562c580274-operator-scripts\") pod \"ce7acbd1-38a2-4fae-8c70-8d562c580274\" (UID: \"ce7acbd1-38a2-4fae-8c70-8d562c580274\") " Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.760875 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckl6s\" (UniqueName: \"kubernetes.io/projected/64b5aba9-30dc-4ef5-a103-c0a1a264abb1-kube-api-access-ckl6s\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.760892 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2rmb\" (UniqueName: \"kubernetes.io/projected/fec66282-38cc-4eb2-ac47-c1a5a7b377f2-kube-api-access-w2rmb\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.760906 4625 reconciler_common.go:293] "Volume 
detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3af4557-685c-47a8-b194-2eaa04edad39-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.760918 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fec66282-38cc-4eb2-ac47-c1a5a7b377f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.760935 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b5aba9-30dc-4ef5-a103-c0a1a264abb1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.760950 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2fq5\" (UniqueName: \"kubernetes.io/projected/d3af4557-685c-47a8-b194-2eaa04edad39-kube-api-access-t2fq5\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.761579 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7acbd1-38a2-4fae-8c70-8d562c580274-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce7acbd1-38a2-4fae-8c70-8d562c580274" (UID: "ce7acbd1-38a2-4fae-8c70-8d562c580274"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.763292 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a6db25-9cc7-4109-9aac-0b1b13e9082d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3a6db25-9cc7-4109-9aac-0b1b13e9082d" (UID: "c3a6db25-9cc7-4109-9aac-0b1b13e9082d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.768956 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7acbd1-38a2-4fae-8c70-8d562c580274-kube-api-access-xt8mm" (OuterVolumeSpecName: "kube-api-access-xt8mm") pod "ce7acbd1-38a2-4fae-8c70-8d562c580274" (UID: "ce7acbd1-38a2-4fae-8c70-8d562c580274"). InnerVolumeSpecName "kube-api-access-xt8mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.781256 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a6db25-9cc7-4109-9aac-0b1b13e9082d-kube-api-access-58qbx" (OuterVolumeSpecName: "kube-api-access-58qbx") pod "c3a6db25-9cc7-4109-9aac-0b1b13e9082d" (UID: "c3a6db25-9cc7-4109-9aac-0b1b13e9082d"). InnerVolumeSpecName "kube-api-access-58qbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.862232 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce7acbd1-38a2-4fae-8c70-8d562c580274-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.862273 4625 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3a6db25-9cc7-4109-9aac-0b1b13e9082d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.862285 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt8mm\" (UniqueName: \"kubernetes.io/projected/ce7acbd1-38a2-4fae-8c70-8d562c580274-kube-api-access-xt8mm\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:58 crc kubenswrapper[4625]: I1202 14:06:58.862301 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58qbx\" (UniqueName: \"kubernetes.io/projected/c3a6db25-9cc7-4109-9aac-0b1b13e9082d-kube-api-access-58qbx\") on node \"crc\" DevicePath \"\"" Dec 02 14:06:59 crc kubenswrapper[4625]: I1202 14:06:59.620112 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bkp5f" Dec 02 14:06:59 crc kubenswrapper[4625]: I1202 14:06:59.620149 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-12a0-account-create-update-2jk9s" Dec 02 14:06:59 crc kubenswrapper[4625]: I1202 14:06:59.620168 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jgkc9" Dec 02 14:06:59 crc kubenswrapper[4625]: I1202 14:06:59.620250 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6cf2-account-create-update-jgt9w" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.905058 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-52qdn"] Dec 02 14:07:03 crc kubenswrapper[4625]: E1202 14:07:03.906477 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b5aba9-30dc-4ef5-a103-c0a1a264abb1" containerName="mariadb-account-create-update" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.906498 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b5aba9-30dc-4ef5-a103-c0a1a264abb1" containerName="mariadb-account-create-update" Dec 02 14:07:03 crc kubenswrapper[4625]: E1202 14:07:03.906525 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7acbd1-38a2-4fae-8c70-8d562c580274" containerName="mariadb-database-create" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.906534 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7acbd1-38a2-4fae-8c70-8d562c580274" containerName="mariadb-database-create" Dec 02 14:07:03 crc kubenswrapper[4625]: E1202 14:07:03.906552 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83f4615e-8b8d-420d-ad49-9eaf51763d66" containerName="mariadb-database-create" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.906561 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f4615e-8b8d-420d-ad49-9eaf51763d66" containerName="mariadb-database-create" Dec 02 14:07:03 crc kubenswrapper[4625]: E1202 14:07:03.906577 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a6db25-9cc7-4109-9aac-0b1b13e9082d" containerName="mariadb-database-create" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.906584 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a6db25-9cc7-4109-9aac-0b1b13e9082d" containerName="mariadb-database-create" Dec 02 14:07:03 crc kubenswrapper[4625]: E1202 14:07:03.906606 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec66282-38cc-4eb2-ac47-c1a5a7b377f2" containerName="mariadb-account-create-update" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.906614 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec66282-38cc-4eb2-ac47-c1a5a7b377f2" containerName="mariadb-account-create-update" Dec 02 14:07:03 crc kubenswrapper[4625]: E1202 14:07:03.906630 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3af4557-685c-47a8-b194-2eaa04edad39" containerName="mariadb-account-create-update" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.906637 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3af4557-685c-47a8-b194-2eaa04edad39" containerName="mariadb-account-create-update" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.906877 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7acbd1-38a2-4fae-8c70-8d562c580274" containerName="mariadb-database-create" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.906901 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a6db25-9cc7-4109-9aac-0b1b13e9082d" containerName="mariadb-database-create" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.906914 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3af4557-685c-47a8-b194-2eaa04edad39" containerName="mariadb-account-create-update" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.906928 4625 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="83f4615e-8b8d-420d-ad49-9eaf51763d66" containerName="mariadb-database-create" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.906938 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="fec66282-38cc-4eb2-ac47-c1a5a7b377f2" containerName="mariadb-account-create-update" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.906958 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="64b5aba9-30dc-4ef5-a103-c0a1a264abb1" containerName="mariadb-account-create-update" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.907777 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-52qdn" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.934021 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-52qdn"] Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.939062 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.939147 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zfgtr" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.939437 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.979086 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-52qdn\" (UID: \"f780395c-9363-4a42-9f25-f7ad97bc51b3\") " pod="openstack/nova-cell0-conductor-db-sync-52qdn" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.979152 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-config-data\") pod \"nova-cell0-conductor-db-sync-52qdn\" (UID: \"f780395c-9363-4a42-9f25-f7ad97bc51b3\") " pod="openstack/nova-cell0-conductor-db-sync-52qdn" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.979251 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-scripts\") pod \"nova-cell0-conductor-db-sync-52qdn\" (UID: \"f780395c-9363-4a42-9f25-f7ad97bc51b3\") " pod="openstack/nova-cell0-conductor-db-sync-52qdn" Dec 02 14:07:03 crc kubenswrapper[4625]: I1202 14:07:03.979343 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbnh6\" (UniqueName: \"kubernetes.io/projected/f780395c-9363-4a42-9f25-f7ad97bc51b3-kube-api-access-wbnh6\") pod \"nova-cell0-conductor-db-sync-52qdn\" (UID: \"f780395c-9363-4a42-9f25-f7ad97bc51b3\") " pod="openstack/nova-cell0-conductor-db-sync-52qdn" Dec 02 14:07:04 crc kubenswrapper[4625]: I1202 14:07:04.081704 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbnh6\" (UniqueName: \"kubernetes.io/projected/f780395c-9363-4a42-9f25-f7ad97bc51b3-kube-api-access-wbnh6\") pod \"nova-cell0-conductor-db-sync-52qdn\" (UID: \"f780395c-9363-4a42-9f25-f7ad97bc51b3\") " pod="openstack/nova-cell0-conductor-db-sync-52qdn" Dec 02 14:07:04 crc kubenswrapper[4625]: I1202 14:07:04.081839 
4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-52qdn\" (UID: \"f780395c-9363-4a42-9f25-f7ad97bc51b3\") " pod="openstack/nova-cell0-conductor-db-sync-52qdn"
Dec 02 14:07:04 crc kubenswrapper[4625]: I1202 14:07:04.081883 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-config-data\") pod \"nova-cell0-conductor-db-sync-52qdn\" (UID: \"f780395c-9363-4a42-9f25-f7ad97bc51b3\") " pod="openstack/nova-cell0-conductor-db-sync-52qdn"
Dec 02 14:07:04 crc kubenswrapper[4625]: I1202 14:07:04.081984 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-scripts\") pod \"nova-cell0-conductor-db-sync-52qdn\" (UID: \"f780395c-9363-4a42-9f25-f7ad97bc51b3\") " pod="openstack/nova-cell0-conductor-db-sync-52qdn"
Dec 02 14:07:04 crc kubenswrapper[4625]: I1202 14:07:04.092418 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-scripts\") pod \"nova-cell0-conductor-db-sync-52qdn\" (UID: \"f780395c-9363-4a42-9f25-f7ad97bc51b3\") " pod="openstack/nova-cell0-conductor-db-sync-52qdn"
Dec 02 14:07:04 crc kubenswrapper[4625]: I1202 14:07:04.092439 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-52qdn\" (UID: \"f780395c-9363-4a42-9f25-f7ad97bc51b3\") " pod="openstack/nova-cell0-conductor-db-sync-52qdn"
Dec 02 14:07:04 crc kubenswrapper[4625]: I1202 14:07:04.113356 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-config-data\") pod \"nova-cell0-conductor-db-sync-52qdn\" (UID: \"f780395c-9363-4a42-9f25-f7ad97bc51b3\") " pod="openstack/nova-cell0-conductor-db-sync-52qdn"
Dec 02 14:07:04 crc kubenswrapper[4625]: I1202 14:07:04.114329 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbnh6\" (UniqueName: \"kubernetes.io/projected/f780395c-9363-4a42-9f25-f7ad97bc51b3-kube-api-access-wbnh6\") pod \"nova-cell0-conductor-db-sync-52qdn\" (UID: \"f780395c-9363-4a42-9f25-f7ad97bc51b3\") " pod="openstack/nova-cell0-conductor-db-sync-52qdn"
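
The kube-api-access-wbnh6 volume mounted above is the kubelet's projected service-account volume. As a sketch using client-go types, it typically projects three sources into /var/run/secrets/kubernetes.io/serviceaccount: a bound token, the cluster CA bundle, and the pod's namespace. The shape below follows upstream defaults; the token lifetime and paths are assumptions, not values read from this log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// kubeAPIAccessVolume sketches the projected volume the kubelet mounts at
// /var/run/secrets/kubernetes.io/serviceaccount.
func kubeAPIAccessVolume(name string) corev1.Volume {
	expiry := int64(3607) // assumed default bound-token lifetime
	return corev1.Volume{
		Name: name, // e.g. "kube-api-access-wbnh6"
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
}

func main() {
	fmt.Println(kubeAPIAccessVolume("kube-api-access-wbnh6").Name)
}
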
Dec 02 14:07:04 crc kubenswrapper[4625]: I1202 14:07:04.233384 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-52qdn"
Dec 02 14:07:04 crc kubenswrapper[4625]: I1202 14:07:04.797410 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-52qdn"]
Dec 02 14:07:04 crc kubenswrapper[4625]: I1202 14:07:04.840765 4625 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 02 14:07:05 crc kubenswrapper[4625]: I1202 14:07:05.718479 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-52qdn" event={"ID":"f780395c-9363-4a42-9f25-f7ad97bc51b3","Type":"ContainerStarted","Data":"d5d745600a92eb58047c66cd70d5359b19e345cb03c06b0cae7100d2d33caec5"}
Dec 02 14:07:06 crc kubenswrapper[4625]: I1202 14:07:06.099834 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c94877878-jvhxv" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused"
Dec 02 14:07:06 crc kubenswrapper[4625]: I1202 14:07:06.212440 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Dec 02 14:07:06 crc kubenswrapper[4625]: I1202 14:07:06.358765 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7dc4db5bfb-zbs4l" podUID="92339196-3d33-4b76-9ba2-81e1a8373e84" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused"
Dec 02 14:07:07 crc kubenswrapper[4625]: I1202 14:07:07.774436 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7e342617-f071-4967-a02d-38534c2c7c11","Type":"ContainerStarted","Data":"aeba371cd491b943d9e95d771bc2e50b7245caf5715ef99e261de200ab089e67"}
Dec 02 14:07:07 crc kubenswrapper[4625]: I1202 14:07:07.799400 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=9.609682925 podStartE2EDuration="42.799374317s" podCreationTimestamp="2025-12-02 14:06:25 +0000 UTC" firstStartedPulling="2025-12-02 14:06:33.200143419 +0000 UTC m=+1349.162320494" lastFinishedPulling="2025-12-02 14:07:06.389834811 +0000 UTC m=+1382.352011886" observedRunningTime="2025-12-02 14:07:07.793139747 +0000 UTC m=+1383.755316822" watchObservedRunningTime="2025-12-02 14:07:07.799374317 +0000 UTC m=+1383.761551392"
Dec 02 14:07:13 crc kubenswrapper[4625]: I1202 14:07:13.894217 4625 generic.go:334] "Generic (PLEG): container finished" podID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerID="20253212cab0b0a5c7a4868bbb30104896188f6183cdc98db39f60492180d611" exitCode=137
Dec 02 14:07:13 crc kubenswrapper[4625]: I1202 14:07:13.894583 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef5cc506-cd7a-493f-a749-c39160dfe5bd","Type":"ContainerDied","Data":"20253212cab0b0a5c7a4868bbb30104896188f6183cdc98db39f60492180d611"}
Dec 02 14:07:15 crc kubenswrapper[4625]: I1202 14:07:15.872665 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
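
The "Observed pod startup duration" entry above decomposes as: podStartE2EDuration (42.799374317s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (9.609682925s) is that end-to-end figure minus the 33.189691392s spent pulling the image (lastFinishedPulling minus firstStartedPulling). A small check of the arithmetic, taking "end-to-end minus image pull" as an interpretation that matches these numbers rather than as authoritative kubelet semantics:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's time.Time.String(), which produced these stamps.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-12-02 14:06:25 +0000 UTC")
	firstPull := parse("2025-12-02 14:06:33.200143419 +0000 UTC")
	lastPull := parse("2025-12-02 14:07:06.389834811 +0000 UTC")
	running := parse("2025-12-02 14:07:07.799374317 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)            // 42.799374317s == podStartE2EDuration
	pulling := lastPull.Sub(firstPull)     // 33.189691392s pulling the image
	fmt.Println(e2e, pulling, e2e-pulling) // e2e-pulling == 9.609682925s == podStartSLOduration
}

The exit codes in the surrounding entries follow the usual 128+signal convention: exitCode=137 above is 128+9 (SIGKILL, the ceilometer container killed after its grace period), while the exitCode=143 for glance just below is 128+15 (SIGTERM honored within its gracePeriod=30).
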
Dec 02 14:07:15 crc kubenswrapper[4625]: I1202 14:07:15.873803 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" containerName="glance-httpd" containerID="cri-o://5db026b1ff3c76612f197105849c16890f68a14a2db39a36371700cc1d4b55a0" gracePeriod=30
Dec 02 14:07:15 crc kubenswrapper[4625]: I1202 14:07:15.874131 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" containerName="glance-log" containerID="cri-o://56c3f6cb475a41b5dd8f73fb6596069ce9425fc8aaba665f13f9e58eda5f2976" gracePeriod=30
Dec 02 14:07:16 crc kubenswrapper[4625]: I1202 14:07:16.955070 4625 generic.go:334] "Generic (PLEG): container finished" podID="fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" containerID="56c3f6cb475a41b5dd8f73fb6596069ce9425fc8aaba665f13f9e58eda5f2976" exitCode=143
Dec 02 14:07:16 crc kubenswrapper[4625]: I1202 14:07:16.955152 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d","Type":"ContainerDied","Data":"56c3f6cb475a41b5dd8f73fb6596069ce9425fc8aaba665f13f9e58eda5f2976"}
Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.759395 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.826889 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef5cc506-cd7a-493f-a749-c39160dfe5bd-log-httpd\") pod \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " 
Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.827716 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef5cc506-cd7a-493f-a749-c39160dfe5bd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ef5cc506-cd7a-493f-a749-c39160dfe5bd" (UID: "ef5cc506-cd7a-493f-a749-c39160dfe5bd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.828536 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksnzm\" (UniqueName: \"kubernetes.io/projected/ef5cc506-cd7a-493f-a749-c39160dfe5bd-kube-api-access-ksnzm\") pod \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.828690 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-sg-core-conf-yaml\") pod \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.828732 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef5cc506-cd7a-493f-a749-c39160dfe5bd-run-httpd\") pod \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.828775 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-config-data\") pod \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.828809 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-combined-ca-bundle\") pod \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.828842 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-scripts\") pod \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\" (UID: \"ef5cc506-cd7a-493f-a749-c39160dfe5bd\") " Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.830598 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef5cc506-cd7a-493f-a749-c39160dfe5bd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ef5cc506-cd7a-493f-a749-c39160dfe5bd" (UID: "ef5cc506-cd7a-493f-a749-c39160dfe5bd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.837171 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-scripts" (OuterVolumeSpecName: "scripts") pod "ef5cc506-cd7a-493f-a749-c39160dfe5bd" (UID: "ef5cc506-cd7a-493f-a749-c39160dfe5bd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.841801 4625 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef5cc506-cd7a-493f-a749-c39160dfe5bd-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.841850 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.841862 4625 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef5cc506-cd7a-493f-a749-c39160dfe5bd-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.845540 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef5cc506-cd7a-493f-a749-c39160dfe5bd-kube-api-access-ksnzm" (OuterVolumeSpecName: "kube-api-access-ksnzm") pod "ef5cc506-cd7a-493f-a749-c39160dfe5bd" (UID: "ef5cc506-cd7a-493f-a749-c39160dfe5bd"). InnerVolumeSpecName "kube-api-access-ksnzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.938637 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ef5cc506-cd7a-493f-a749-c39160dfe5bd" (UID: "ef5cc506-cd7a-493f-a749-c39160dfe5bd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.949580 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksnzm\" (UniqueName: \"kubernetes.io/projected/ef5cc506-cd7a-493f-a749-c39160dfe5bd-kube-api-access-ksnzm\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.949627 4625 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.991029 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef5cc506-cd7a-493f-a749-c39160dfe5bd","Type":"ContainerDied","Data":"c89bc10352aba387442483c0b537429b7030255f0324f0c0b63fd7bc19fb7f97"} Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.991096 4625 scope.go:117] "RemoveContainer" containerID="20253212cab0b0a5c7a4868bbb30104896188f6183cdc98db39f60492180d611" Dec 02 14:07:18 crc kubenswrapper[4625]: I1202 14:07:18.991260 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.031963 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef5cc506-cd7a-493f-a749-c39160dfe5bd" (UID: "ef5cc506-cd7a-493f-a749-c39160dfe5bd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.034596 4625 scope.go:117] "RemoveContainer" containerID="c6039b05e41ef7d4ae61f2fb41e8fe676f5a65f8a9d572642295581b230194b1" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.052925 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.053353 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-config-data" (OuterVolumeSpecName: "config-data") pod "ef5cc506-cd7a-493f-a749-c39160dfe5bd" (UID: "ef5cc506-cd7a-493f-a749-c39160dfe5bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.084985 4625 scope.go:117] "RemoveContainer" containerID="c3f6070b551ac1178afe089b224b75519f00bd2d6100a0ac7681cd48723ced8c" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.121508 4625 scope.go:117] "RemoveContainer" containerID="dd4ad26cf876909ee190fd42b43eb5f98091246439ef5bdb949204d88965cb80" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.155192 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef5cc506-cd7a-493f-a749-c39160dfe5bd-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.331382 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.341419 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.355722 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.356060 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="39cda872-2de1-4f58-9eda-16328ffa31ac" containerName="glance-log" containerID="cri-o://fd2db654ed13b96cd00c49d3e65ecd785026bd289707eb086e72d1ca866e19a2" gracePeriod=30 Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.356188 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="39cda872-2de1-4f58-9eda-16328ffa31ac" containerName="glance-httpd" containerID="cri-o://d74e9a467dab30b830e84a2a40c7115b6c525d66bc102bfb933e87f700ba68e0" gracePeriod=30 Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.391669 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:07:19 crc kubenswrapper[4625]: E1202 14:07:19.392173 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="ceilometer-central-agent" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.392203 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="ceilometer-central-agent" Dec 02 14:07:19 crc kubenswrapper[4625]: E1202 14:07:19.392220 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="ceilometer-notification-agent" Dec 02 14:07:19 crc kubenswrapper[4625]: 
I1202 14:07:19.392230 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="ceilometer-notification-agent" Dec 02 14:07:19 crc kubenswrapper[4625]: E1202 14:07:19.392277 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="sg-core" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.392287 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="sg-core" Dec 02 14:07:19 crc kubenswrapper[4625]: E1202 14:07:19.392297 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="proxy-httpd" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.392304 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="proxy-httpd" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.392526 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="sg-core" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.392543 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="proxy-httpd" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.392556 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="ceilometer-central-agent" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.392572 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" containerName="ceilometer-notification-agent" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.394597 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.397810 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.402690 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.432170 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.462534 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-scripts\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.462591 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-config-data\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.462732 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60abedde-73d4-45e3-b0af-01d4dc811052-log-httpd\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.462794 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm62p\" (UniqueName: \"kubernetes.io/projected/60abedde-73d4-45e3-b0af-01d4dc811052-kube-api-access-bm62p\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.462884 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60abedde-73d4-45e3-b0af-01d4dc811052-run-httpd\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.462918 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.462949 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.567217 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60abedde-73d4-45e3-b0af-01d4dc811052-log-httpd\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.570035 4625 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60abedde-73d4-45e3-b0af-01d4dc811052-log-httpd\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.570117 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm62p\" (UniqueName: \"kubernetes.io/projected/60abedde-73d4-45e3-b0af-01d4dc811052-kube-api-access-bm62p\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.570304 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60abedde-73d4-45e3-b0af-01d4dc811052-run-httpd\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.570406 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.570446 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.570571 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-scripts\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.570598 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-config-data\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.573145 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60abedde-73d4-45e3-b0af-01d4dc811052-run-httpd\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.584200 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-scripts\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.584691 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.591835 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-config-data\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.597116 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.598039 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm62p\" (UniqueName: \"kubernetes.io/projected/60abedde-73d4-45e3-b0af-01d4dc811052-kube-api-access-bm62p\") pod \"ceilometer-0\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " pod="openstack/ceilometer-0" Dec 02 14:07:19 crc kubenswrapper[4625]: I1202 14:07:19.723060 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.052751 4625 generic.go:334] "Generic (PLEG): container finished" podID="39cda872-2de1-4f58-9eda-16328ffa31ac" containerID="fd2db654ed13b96cd00c49d3e65ecd785026bd289707eb086e72d1ca866e19a2" exitCode=143 Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.053287 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"39cda872-2de1-4f58-9eda-16328ffa31ac","Type":"ContainerDied","Data":"fd2db654ed13b96cd00c49d3e65ecd785026bd289707eb086e72d1ca866e19a2"} Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.064782 4625 generic.go:334] "Generic (PLEG): container finished" podID="fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" containerID="5db026b1ff3c76612f197105849c16890f68a14a2db39a36371700cc1d4b55a0" exitCode=0 Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.064876 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d","Type":"ContainerDied","Data":"5db026b1ff3c76612f197105849c16890f68a14a2db39a36371700cc1d4b55a0"} Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.067755 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-52qdn" event={"ID":"f780395c-9363-4a42-9f25-f7ad97bc51b3","Type":"ContainerStarted","Data":"b179ab67ebb82e13e7582b87e6336ef7e1c0a271dbe138fd902e3eabb9740a9f"} Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.151439 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-52qdn" podStartSLOduration=3.332397936 podStartE2EDuration="17.151386078s" podCreationTimestamp="2025-12-02 14:07:03 +0000 UTC" firstStartedPulling="2025-12-02 14:07:04.840442942 +0000 UTC m=+1380.802620017" lastFinishedPulling="2025-12-02 14:07:18.659431084 +0000 UTC m=+1394.621608159" observedRunningTime="2025-12-02 14:07:20.092928176 +0000 UTC m=+1396.055105251" watchObservedRunningTime="2025-12-02 14:07:20.151386078 +0000 UTC m=+1396.113563153" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.242136 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.318616 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 
14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.464589 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.502491 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.612557 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-public-tls-certs\") pod \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.613147 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-combined-ca-bundle\") pod \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.613210 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-httpd-run\") pod \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.613357 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69sjb\" (UniqueName: \"kubernetes.io/projected/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-kube-api-access-69sjb\") pod \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.613576 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.613634 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-logs\") pod \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.613671 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-config-data\") pod \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.613744 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-scripts\") pod \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\" (UID: \"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d\") " Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.614200 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" (UID: "fdb747fd-6e9d-4335-8b52-0fb8f42fd68d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.614668 4625 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.616092 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-logs" (OuterVolumeSpecName: "logs") pod "fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" (UID: "fdb747fd-6e9d-4335-8b52-0fb8f42fd68d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.626699 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-scripts" (OuterVolumeSpecName: "scripts") pod "fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" (UID: "fdb747fd-6e9d-4335-8b52-0fb8f42fd68d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.633606 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-kube-api-access-69sjb" (OuterVolumeSpecName: "kube-api-access-69sjb") pod "fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" (UID: "fdb747fd-6e9d-4335-8b52-0fb8f42fd68d"). InnerVolumeSpecName "kube-api-access-69sjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.666570 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" (UID: "fdb747fd-6e9d-4335-8b52-0fb8f42fd68d"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.716926 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69sjb\" (UniqueName: \"kubernetes.io/projected/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-kube-api-access-69sjb\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.716986 4625 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.717000 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.717009 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.765851 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" (UID: "fdb747fd-6e9d-4335-8b52-0fb8f42fd68d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.766559 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" (UID: "fdb747fd-6e9d-4335-8b52-0fb8f42fd68d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.797929 4625 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.804554 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-config-data" (OuterVolumeSpecName: "config-data") pod "fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" (UID: "fdb747fd-6e9d-4335-8b52-0fb8f42fd68d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.821904 4625 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.821952 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.821966 4625 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.821977 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:20 crc kubenswrapper[4625]: I1202 14:07:20.881689 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef5cc506-cd7a-493f-a749-c39160dfe5bd" path="/var/lib/kubelet/pods/ef5cc506-cd7a-493f-a749-c39160dfe5bd/volumes" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.085116 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fdb747fd-6e9d-4335-8b52-0fb8f42fd68d","Type":"ContainerDied","Data":"90c6b34f897c35f7e70ef83d1e4d9626d80b83568e1716a598a80217f7c6eaf9"} Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.085182 4625 scope.go:117] "RemoveContainer" containerID="5db026b1ff3c76612f197105849c16890f68a14a2db39a36371700cc1d4b55a0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.085434 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.095695 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60abedde-73d4-45e3-b0af-01d4dc811052","Type":"ContainerStarted","Data":"cc1676e2771d91bf7406bb9dadbbc00b5b5c5805110d3f25743d807f306fc44a"} Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.138382 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.145038 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.177667 4625 scope.go:117] "RemoveContainer" containerID="56c3f6cb475a41b5dd8f73fb6596069ce9425fc8aaba665f13f9e58eda5f2976" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.211375 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:07:21 crc kubenswrapper[4625]: E1202 14:07:21.212850 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" containerName="glance-log" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.213174 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" containerName="glance-log" Dec 02 14:07:21 crc kubenswrapper[4625]: E1202 14:07:21.215629 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" containerName="glance-httpd" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.215683 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" containerName="glance-httpd" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.216233 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" containerName="glance-log" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.216269 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" containerName="glance-httpd" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.222000 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.229612 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.230861 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.251480 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.349177 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.349877 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-scripts\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.350093 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-config-data\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.350176 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-logs\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.350228 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.350289 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.350369 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pprkn\" (UniqueName: \"kubernetes.io/projected/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-kube-api-access-pprkn\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.350410 4625 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.460564 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-logs\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.460893 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.461209 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.461531 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pprkn\" (UniqueName: \"kubernetes.io/projected/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-kube-api-access-pprkn\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.472955 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.473232 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.473257 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-scripts\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.473420 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.473478 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-config-data\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.474002 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.462107 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-logs\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.479630 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.481266 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-scripts\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.483270 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.484532 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-config-data\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.490059 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pprkn\" (UniqueName: \"kubernetes.io/projected/7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73-kube-api-access-pprkn\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.524769 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73\") " pod="openstack/glance-default-external-api-0" Dec 02 14:07:21 crc kubenswrapper[4625]: I1202 14:07:21.574298 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 02 14:07:22 crc kubenswrapper[4625]: I1202 14:07:22.125272 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60abedde-73d4-45e3-b0af-01d4dc811052","Type":"ContainerStarted","Data":"e031af8c78503240257740187e5cd8e4d65a9e2361ee6010e82573b660d1a127"} Dec 02 14:07:22 crc kubenswrapper[4625]: I1202 14:07:22.316864 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 02 14:07:22 crc kubenswrapper[4625]: I1202 14:07:22.879942 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdb747fd-6e9d-4335-8b52-0fb8f42fd68d" path="/var/lib/kubelet/pods/fdb747fd-6e9d-4335-8b52-0fb8f42fd68d/volumes" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.298273 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60abedde-73d4-45e3-b0af-01d4dc811052","Type":"ContainerStarted","Data":"a9dda04fd0be95667d00377843ac7b036bf39fcd4d32d6ff03879966c81ea2fa"} Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.300848 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73","Type":"ContainerStarted","Data":"5c9149d126736a08f1b92624ef2c0fd13c1f8667b61aed3402ad40f9a5bda0a8"} Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.307877 4625 generic.go:334] "Generic (PLEG): container finished" podID="39cda872-2de1-4f58-9eda-16328ffa31ac" containerID="d74e9a467dab30b830e84a2a40c7115b6c525d66bc102bfb933e87f700ba68e0" exitCode=0 Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.307945 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"39cda872-2de1-4f58-9eda-16328ffa31ac","Type":"ContainerDied","Data":"d74e9a467dab30b830e84a2a40c7115b6c525d66bc102bfb933e87f700ba68e0"} Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.555213 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.667404 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-combined-ca-bundle\") pod \"39cda872-2de1-4f58-9eda-16328ffa31ac\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.667845 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-config-data\") pod \"39cda872-2de1-4f58-9eda-16328ffa31ac\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.668016 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39cda872-2de1-4f58-9eda-16328ffa31ac-httpd-run\") pod \"39cda872-2de1-4f58-9eda-16328ffa31ac\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.668166 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"39cda872-2de1-4f58-9eda-16328ffa31ac\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.668334 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39cda872-2de1-4f58-9eda-16328ffa31ac-logs\") pod \"39cda872-2de1-4f58-9eda-16328ffa31ac\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.668487 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-internal-tls-certs\") pod \"39cda872-2de1-4f58-9eda-16328ffa31ac\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.668609 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxbn5\" (UniqueName: \"kubernetes.io/projected/39cda872-2de1-4f58-9eda-16328ffa31ac-kube-api-access-hxbn5\") pod \"39cda872-2de1-4f58-9eda-16328ffa31ac\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.668704 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-scripts\") pod \"39cda872-2de1-4f58-9eda-16328ffa31ac\" (UID: \"39cda872-2de1-4f58-9eda-16328ffa31ac\") " Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.669946 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39cda872-2de1-4f58-9eda-16328ffa31ac-logs" (OuterVolumeSpecName: "logs") pod "39cda872-2de1-4f58-9eda-16328ffa31ac" (UID: "39cda872-2de1-4f58-9eda-16328ffa31ac"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.672568 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39cda872-2de1-4f58-9eda-16328ffa31ac-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "39cda872-2de1-4f58-9eda-16328ffa31ac" (UID: "39cda872-2de1-4f58-9eda-16328ffa31ac"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.714792 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-scripts" (OuterVolumeSpecName: "scripts") pod "39cda872-2de1-4f58-9eda-16328ffa31ac" (UID: "39cda872-2de1-4f58-9eda-16328ffa31ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.723101 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39cda872-2de1-4f58-9eda-16328ffa31ac-kube-api-access-hxbn5" (OuterVolumeSpecName: "kube-api-access-hxbn5") pod "39cda872-2de1-4f58-9eda-16328ffa31ac" (UID: "39cda872-2de1-4f58-9eda-16328ffa31ac"). InnerVolumeSpecName "kube-api-access-hxbn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.770817 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "39cda872-2de1-4f58-9eda-16328ffa31ac" (UID: "39cda872-2de1-4f58-9eda-16328ffa31ac"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.771293 4625 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.771341 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39cda872-2de1-4f58-9eda-16328ffa31ac-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.771350 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxbn5\" (UniqueName: \"kubernetes.io/projected/39cda872-2de1-4f58-9eda-16328ffa31ac-kube-api-access-hxbn5\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.771362 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.771371 4625 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39cda872-2de1-4f58-9eda-16328ffa31ac-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.828398 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39cda872-2de1-4f58-9eda-16328ffa31ac" (UID: "39cda872-2de1-4f58-9eda-16328ffa31ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.872987 4625 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.880659 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.880711 4625 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.924529 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-config-data" (OuterVolumeSpecName: "config-data") pod "39cda872-2de1-4f58-9eda-16328ffa31ac" (UID: "39cda872-2de1-4f58-9eda-16328ffa31ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.942268 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "39cda872-2de1-4f58-9eda-16328ffa31ac" (UID: "39cda872-2de1-4f58-9eda-16328ffa31ac"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.985884 4625 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:23 crc kubenswrapper[4625]: I1202 14:07:23.985957 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39cda872-2de1-4f58-9eda-16328ffa31ac-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.230657 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.318653 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73","Type":"ContainerStarted","Data":"51e0a845a0eaf9205de8cd322bef2c57b7f40b1e884e17e3b87385127ecfd225"} Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.322402 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"39cda872-2de1-4f58-9eda-16328ffa31ac","Type":"ContainerDied","Data":"cffdbb07ec9c92386cae0863e94f8961c3231c4ad73e5dcad2a8d19e25e7115c"} Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.322444 4625 scope.go:117] "RemoveContainer" containerID="d74e9a467dab30b830e84a2a40c7115b6c525d66bc102bfb933e87f700ba68e0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.322605 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.344066 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7dc4db5bfb-zbs4l" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.373379 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.396204 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.397565 4625 scope.go:117] "RemoveContainer" containerID="fd2db654ed13b96cd00c49d3e65ecd785026bd289707eb086e72d1ca866e19a2" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.431410 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:07:24 crc kubenswrapper[4625]: E1202 14:07:24.432000 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39cda872-2de1-4f58-9eda-16328ffa31ac" containerName="glance-httpd" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.432034 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="39cda872-2de1-4f58-9eda-16328ffa31ac" containerName="glance-httpd" Dec 02 14:07:24 crc kubenswrapper[4625]: E1202 14:07:24.432068 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39cda872-2de1-4f58-9eda-16328ffa31ac" containerName="glance-log" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.432077 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="39cda872-2de1-4f58-9eda-16328ffa31ac" containerName="glance-log" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.432912 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="39cda872-2de1-4f58-9eda-16328ffa31ac" containerName="glance-log" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.432953 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="39cda872-2de1-4f58-9eda-16328ffa31ac" containerName="glance-httpd" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.434251 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.445069 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.445336 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.468060 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.482052 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.603803 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7831eff5-dd90-4e3d-b6ec-86ec291099f2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.604666 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7831eff5-dd90-4e3d-b6ec-86ec291099f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.604785 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7831eff5-dd90-4e3d-b6ec-86ec291099f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.605118 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7831eff5-dd90-4e3d-b6ec-86ec291099f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.605490 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7831eff5-dd90-4e3d-b6ec-86ec291099f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.605700 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7831eff5-dd90-4e3d-b6ec-86ec291099f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.605819 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " 
pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.605934 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vnhr\" (UniqueName: \"kubernetes.io/projected/7831eff5-dd90-4e3d-b6ec-86ec291099f2-kube-api-access-5vnhr\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.611412 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c94877878-jvhxv"] Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.707860 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7831eff5-dd90-4e3d-b6ec-86ec291099f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.707982 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7831eff5-dd90-4e3d-b6ec-86ec291099f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.708067 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7831eff5-dd90-4e3d-b6ec-86ec291099f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.708096 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.708129 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vnhr\" (UniqueName: \"kubernetes.io/projected/7831eff5-dd90-4e3d-b6ec-86ec291099f2-kube-api-access-5vnhr\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.708222 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7831eff5-dd90-4e3d-b6ec-86ec291099f2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.708262 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7831eff5-dd90-4e3d-b6ec-86ec291099f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.708291 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7831eff5-dd90-4e3d-b6ec-86ec291099f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.710426 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7831eff5-dd90-4e3d-b6ec-86ec291099f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.710838 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7831eff5-dd90-4e3d-b6ec-86ec291099f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.714018 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.725170 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7831eff5-dd90-4e3d-b6ec-86ec291099f2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.727883 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7831eff5-dd90-4e3d-b6ec-86ec291099f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.741066 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7831eff5-dd90-4e3d-b6ec-86ec291099f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.770810 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vnhr\" (UniqueName: \"kubernetes.io/projected/7831eff5-dd90-4e3d-b6ec-86ec291099f2-kube-api-access-5vnhr\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.775555 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7831eff5-dd90-4e3d-b6ec-86ec291099f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.815879 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"7831eff5-dd90-4e3d-b6ec-86ec291099f2\") " pod="openstack/glance-default-internal-api-0" Dec 02 14:07:24 crc kubenswrapper[4625]: I1202 14:07:24.921368 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39cda872-2de1-4f58-9eda-16328ffa31ac" path="/var/lib/kubelet/pods/39cda872-2de1-4f58-9eda-16328ffa31ac/volumes" Dec 02 14:07:25 crc kubenswrapper[4625]: I1202 14:07:25.072394 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 02 14:07:25 crc kubenswrapper[4625]: I1202 14:07:25.391940 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60abedde-73d4-45e3-b0af-01d4dc811052","Type":"ContainerStarted","Data":"b6308d3ce761a70b9fcedea9dbaecc1f053ba2455fffeaafa228e8be09cb9a64"} Dec 02 14:07:25 crc kubenswrapper[4625]: I1202 14:07:25.401033 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c94877878-jvhxv" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon-log" containerID="cri-o://c7796a4fbf01c822d9a56af51c98b455d177862f046fb3e2704f97fb5f5a4805" gracePeriod=30 Dec 02 14:07:25 crc kubenswrapper[4625]: I1202 14:07:25.401505 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c94877878-jvhxv" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" containerID="cri-o://0061a10534f7ca1b235f7c54381a27b61204ac5be16f48128c9ee3a3c6b5ee47" gracePeriod=30 Dec 02 14:07:25 crc kubenswrapper[4625]: I1202 14:07:25.748299 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 02 14:07:26 crc kubenswrapper[4625]: I1202 14:07:26.422209 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7831eff5-dd90-4e3d-b6ec-86ec291099f2","Type":"ContainerStarted","Data":"bc5ceaf29924bff77dcebe4e05bbf7d057f5ab57c416bee7e39399096aa25c1c"} Dec 02 14:07:26 crc kubenswrapper[4625]: I1202 14:07:26.427192 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73","Type":"ContainerStarted","Data":"081a592cbe025b7800686db1b435e29ec612428d50eebf109f8398f5eaaa591c"} Dec 02 14:07:27 crc kubenswrapper[4625]: I1202 14:07:27.441899 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="60abedde-73d4-45e3-b0af-01d4dc811052" containerName="ceilometer-central-agent" containerID="cri-o://e031af8c78503240257740187e5cd8e4d65a9e2361ee6010e82573b660d1a127" gracePeriod=30 Dec 02 14:07:27 crc kubenswrapper[4625]: I1202 14:07:27.442295 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60abedde-73d4-45e3-b0af-01d4dc811052","Type":"ContainerStarted","Data":"018fb5485d875bc5adb75d132556dabd5932adae2be5b5c5fdc6ada031a784ab"} Dec 02 14:07:27 crc kubenswrapper[4625]: I1202 14:07:27.442566 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="60abedde-73d4-45e3-b0af-01d4dc811052" containerName="proxy-httpd" containerID="cri-o://018fb5485d875bc5adb75d132556dabd5932adae2be5b5c5fdc6ada031a784ab" gracePeriod=30 Dec 02 14:07:27 crc kubenswrapper[4625]: I1202 14:07:27.442650 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="60abedde-73d4-45e3-b0af-01d4dc811052" 
containerName="sg-core" containerID="cri-o://b6308d3ce761a70b9fcedea9dbaecc1f053ba2455fffeaafa228e8be09cb9a64" gracePeriod=30 Dec 02 14:07:27 crc kubenswrapper[4625]: I1202 14:07:27.442672 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="60abedde-73d4-45e3-b0af-01d4dc811052" containerName="ceilometer-notification-agent" containerID="cri-o://a9dda04fd0be95667d00377843ac7b036bf39fcd4d32d6ff03879966c81ea2fa" gracePeriod=30 Dec 02 14:07:27 crc kubenswrapper[4625]: I1202 14:07:27.442705 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 14:07:27 crc kubenswrapper[4625]: I1202 14:07:27.452999 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7831eff5-dd90-4e3d-b6ec-86ec291099f2","Type":"ContainerStarted","Data":"154075c0f2d39791e87c547524523562d1562dce5d12bd4eac3f105fdb962694"} Dec 02 14:07:27 crc kubenswrapper[4625]: I1202 14:07:27.471248 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.426515371 podStartE2EDuration="8.471221119s" podCreationTimestamp="2025-12-02 14:07:19 +0000 UTC" firstStartedPulling="2025-12-02 14:07:20.525725597 +0000 UTC m=+1396.487902672" lastFinishedPulling="2025-12-02 14:07:26.570431345 +0000 UTC m=+1402.532608420" observedRunningTime="2025-12-02 14:07:27.467493227 +0000 UTC m=+1403.429670302" watchObservedRunningTime="2025-12-02 14:07:27.471221119 +0000 UTC m=+1403.433398194" Dec 02 14:07:27 crc kubenswrapper[4625]: I1202 14:07:27.485276 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.485241744 podStartE2EDuration="6.485241744s" podCreationTimestamp="2025-12-02 14:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:07:26.471845452 +0000 UTC m=+1402.434022527" watchObservedRunningTime="2025-12-02 14:07:27.485241744 +0000 UTC m=+1403.447418819" Dec 02 14:07:28 crc kubenswrapper[4625]: I1202 14:07:28.468040 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7831eff5-dd90-4e3d-b6ec-86ec291099f2","Type":"ContainerStarted","Data":"0b559d8ef9b3d5d2ac611513132ed62f39ac7bd9d852a55b1fd8b6e2553ce9e8"} Dec 02 14:07:28 crc kubenswrapper[4625]: I1202 14:07:28.475952 4625 generic.go:334] "Generic (PLEG): container finished" podID="60abedde-73d4-45e3-b0af-01d4dc811052" containerID="018fb5485d875bc5adb75d132556dabd5932adae2be5b5c5fdc6ada031a784ab" exitCode=0 Dec 02 14:07:28 crc kubenswrapper[4625]: I1202 14:07:28.475997 4625 generic.go:334] "Generic (PLEG): container finished" podID="60abedde-73d4-45e3-b0af-01d4dc811052" containerID="b6308d3ce761a70b9fcedea9dbaecc1f053ba2455fffeaafa228e8be09cb9a64" exitCode=2 Dec 02 14:07:28 crc kubenswrapper[4625]: I1202 14:07:28.476006 4625 generic.go:334] "Generic (PLEG): container finished" podID="60abedde-73d4-45e3-b0af-01d4dc811052" containerID="a9dda04fd0be95667d00377843ac7b036bf39fcd4d32d6ff03879966c81ea2fa" exitCode=0 Dec 02 14:07:28 crc kubenswrapper[4625]: I1202 14:07:28.476045 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60abedde-73d4-45e3-b0af-01d4dc811052","Type":"ContainerDied","Data":"018fb5485d875bc5adb75d132556dabd5932adae2be5b5c5fdc6ada031a784ab"} Dec 02 14:07:28 crc 
kubenswrapper[4625]: I1202 14:07:28.476127 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60abedde-73d4-45e3-b0af-01d4dc811052","Type":"ContainerDied","Data":"b6308d3ce761a70b9fcedea9dbaecc1f053ba2455fffeaafa228e8be09cb9a64"} Dec 02 14:07:28 crc kubenswrapper[4625]: I1202 14:07:28.476138 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60abedde-73d4-45e3-b0af-01d4dc811052","Type":"ContainerDied","Data":"a9dda04fd0be95667d00377843ac7b036bf39fcd4d32d6ff03879966c81ea2fa"} Dec 02 14:07:28 crc kubenswrapper[4625]: I1202 14:07:28.497993 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.497957275 podStartE2EDuration="4.497957275s" podCreationTimestamp="2025-12-02 14:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:07:28.493338249 +0000 UTC m=+1404.455515324" watchObservedRunningTime="2025-12-02 14:07:28.497957275 +0000 UTC m=+1404.460134350" Dec 02 14:07:28 crc kubenswrapper[4625]: I1202 14:07:28.576093 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c94877878-jvhxv" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:39194->10.217.0.142:8443: read: connection reset by peer" Dec 02 14:07:29 crc kubenswrapper[4625]: I1202 14:07:29.491390 4625 generic.go:334] "Generic (PLEG): container finished" podID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerID="0061a10534f7ca1b235f7c54381a27b61204ac5be16f48128c9ee3a3c6b5ee47" exitCode=0 Dec 02 14:07:29 crc kubenswrapper[4625]: I1202 14:07:29.491462 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c94877878-jvhxv" event={"ID":"04b6d9a8-9eed-441e-a627-83774df65ed9","Type":"ContainerDied","Data":"0061a10534f7ca1b235f7c54381a27b61204ac5be16f48128c9ee3a3c6b5ee47"} Dec 02 14:07:29 crc kubenswrapper[4625]: I1202 14:07:29.492059 4625 scope.go:117] "RemoveContainer" containerID="f7d7aff050b1cd68f760459d9ee8066bf44a2756b77213a691265525e661240d" Dec 02 14:07:31 crc kubenswrapper[4625]: I1202 14:07:31.574603 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 02 14:07:31 crc kubenswrapper[4625]: I1202 14:07:31.576385 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 02 14:07:31 crc kubenswrapper[4625]: I1202 14:07:31.612145 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 14:07:31 crc kubenswrapper[4625]: I1202 14:07:31.627981 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 02 14:07:32 crc kubenswrapper[4625]: I1202 14:07:32.538376 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 14:07:32 crc kubenswrapper[4625]: I1202 14:07:32.538437 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 02 14:07:34 crc kubenswrapper[4625]: I1202 14:07:34.581390 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 14:07:34 crc 
kubenswrapper[4625]: I1202 14:07:34.581914 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 14:07:35 crc kubenswrapper[4625]: I1202 14:07:35.072652 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 02 14:07:35 crc kubenswrapper[4625]: I1202 14:07:35.073021 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 02 14:07:35 crc kubenswrapper[4625]: I1202 14:07:35.093002 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 14:07:35 crc kubenswrapper[4625]: I1202 14:07:35.094106 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 02 14:07:35 crc kubenswrapper[4625]: I1202 14:07:35.137249 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 02 14:07:35 crc kubenswrapper[4625]: I1202 14:07:35.139845 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 02 14:07:35 crc kubenswrapper[4625]: I1202 14:07:35.593296 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 14:07:35 crc kubenswrapper[4625]: I1202 14:07:35.593408 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 02 14:07:36 crc kubenswrapper[4625]: I1202 14:07:36.096547 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c94877878-jvhxv" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 02 14:07:36 crc kubenswrapper[4625]: I1202 14:07:36.624522 4625 generic.go:334] "Generic (PLEG): container finished" podID="60abedde-73d4-45e3-b0af-01d4dc811052" containerID="e031af8c78503240257740187e5cd8e4d65a9e2361ee6010e82573b660d1a127" exitCode=0 Dec 02 14:07:36 crc kubenswrapper[4625]: I1202 14:07:36.627005 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60abedde-73d4-45e3-b0af-01d4dc811052","Type":"ContainerDied","Data":"e031af8c78503240257740187e5cd8e4d65a9e2361ee6010e82573b660d1a127"} Dec 02 14:07:36 crc kubenswrapper[4625]: I1202 14:07:36.848343 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:07:36 crc kubenswrapper[4625]: I1202 14:07:36.969849 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm62p\" (UniqueName: \"kubernetes.io/projected/60abedde-73d4-45e3-b0af-01d4dc811052-kube-api-access-bm62p\") pod \"60abedde-73d4-45e3-b0af-01d4dc811052\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " Dec 02 14:07:36 crc kubenswrapper[4625]: I1202 14:07:36.970033 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-config-data\") pod \"60abedde-73d4-45e3-b0af-01d4dc811052\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " Dec 02 14:07:36 crc kubenswrapper[4625]: I1202 14:07:36.970163 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-scripts\") pod \"60abedde-73d4-45e3-b0af-01d4dc811052\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " Dec 02 14:07:36 crc kubenswrapper[4625]: I1202 14:07:36.970214 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60abedde-73d4-45e3-b0af-01d4dc811052-run-httpd\") pod \"60abedde-73d4-45e3-b0af-01d4dc811052\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " Dec 02 14:07:36 crc kubenswrapper[4625]: I1202 14:07:36.970282 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60abedde-73d4-45e3-b0af-01d4dc811052-log-httpd\") pod \"60abedde-73d4-45e3-b0af-01d4dc811052\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " Dec 02 14:07:36 crc kubenswrapper[4625]: I1202 14:07:36.970381 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-combined-ca-bundle\") pod \"60abedde-73d4-45e3-b0af-01d4dc811052\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " Dec 02 14:07:36 crc kubenswrapper[4625]: I1202 14:07:36.970457 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-sg-core-conf-yaml\") pod \"60abedde-73d4-45e3-b0af-01d4dc811052\" (UID: \"60abedde-73d4-45e3-b0af-01d4dc811052\") " Dec 02 14:07:36 crc kubenswrapper[4625]: I1202 14:07:36.972330 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60abedde-73d4-45e3-b0af-01d4dc811052-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "60abedde-73d4-45e3-b0af-01d4dc811052" (UID: "60abedde-73d4-45e3-b0af-01d4dc811052"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:36 crc kubenswrapper[4625]: I1202 14:07:36.972575 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60abedde-73d4-45e3-b0af-01d4dc811052-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "60abedde-73d4-45e3-b0af-01d4dc811052" (UID: "60abedde-73d4-45e3-b0af-01d4dc811052"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:36 crc kubenswrapper[4625]: I1202 14:07:36.993762 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60abedde-73d4-45e3-b0af-01d4dc811052-kube-api-access-bm62p" (OuterVolumeSpecName: "kube-api-access-bm62p") pod "60abedde-73d4-45e3-b0af-01d4dc811052" (UID: "60abedde-73d4-45e3-b0af-01d4dc811052"). InnerVolumeSpecName "kube-api-access-bm62p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:36 crc kubenswrapper[4625]: I1202 14:07:36.995602 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-scripts" (OuterVolumeSpecName: "scripts") pod "60abedde-73d4-45e3-b0af-01d4dc811052" (UID: "60abedde-73d4-45e3-b0af-01d4dc811052"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.033611 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "60abedde-73d4-45e3-b0af-01d4dc811052" (UID: "60abedde-73d4-45e3-b0af-01d4dc811052"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.072704 4625 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60abedde-73d4-45e3-b0af-01d4dc811052-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.072757 4625 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.072770 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm62p\" (UniqueName: \"kubernetes.io/projected/60abedde-73d4-45e3-b0af-01d4dc811052-kube-api-access-bm62p\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.072782 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.072790 4625 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60abedde-73d4-45e3-b0af-01d4dc811052-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.073276 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60abedde-73d4-45e3-b0af-01d4dc811052" (UID: "60abedde-73d4-45e3-b0af-01d4dc811052"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.131010 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-config-data" (OuterVolumeSpecName: "config-data") pod "60abedde-73d4-45e3-b0af-01d4dc811052" (UID: "60abedde-73d4-45e3-b0af-01d4dc811052"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.176367 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.176450 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60abedde-73d4-45e3-b0af-01d4dc811052-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.640538 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.642770 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60abedde-73d4-45e3-b0af-01d4dc811052","Type":"ContainerDied","Data":"cc1676e2771d91bf7406bb9dadbbc00b5b5c5805110d3f25743d807f306fc44a"} Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.642862 4625 scope.go:117] "RemoveContainer" containerID="018fb5485d875bc5adb75d132556dabd5932adae2be5b5c5fdc6ada031a784ab" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.650867 4625 generic.go:334] "Generic (PLEG): container finished" podID="f780395c-9363-4a42-9f25-f7ad97bc51b3" containerID="b179ab67ebb82e13e7582b87e6336ef7e1c0a271dbe138fd902e3eabb9740a9f" exitCode=0 Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.650917 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-52qdn" event={"ID":"f780395c-9363-4a42-9f25-f7ad97bc51b3","Type":"ContainerDied","Data":"b179ab67ebb82e13e7582b87e6336ef7e1c0a271dbe138fd902e3eabb9740a9f"} Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.687615 4625 scope.go:117] "RemoveContainer" containerID="b6308d3ce761a70b9fcedea9dbaecc1f053ba2455fffeaafa228e8be09cb9a64" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.711447 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.728795 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.773037 4625 scope.go:117] "RemoveContainer" containerID="a9dda04fd0be95667d00377843ac7b036bf39fcd4d32d6ff03879966c81ea2fa" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.777560 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:07:37 crc kubenswrapper[4625]: E1202 14:07:37.778131 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60abedde-73d4-45e3-b0af-01d4dc811052" containerName="sg-core" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.778158 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="60abedde-73d4-45e3-b0af-01d4dc811052" containerName="sg-core" Dec 02 14:07:37 crc kubenswrapper[4625]: E1202 14:07:37.778180 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60abedde-73d4-45e3-b0af-01d4dc811052" containerName="ceilometer-notification-agent" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.778201 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="60abedde-73d4-45e3-b0af-01d4dc811052" containerName="ceilometer-notification-agent" Dec 02 14:07:37 crc kubenswrapper[4625]: E1202 14:07:37.778252 4625 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="60abedde-73d4-45e3-b0af-01d4dc811052" containerName="ceilometer-central-agent" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.778261 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="60abedde-73d4-45e3-b0af-01d4dc811052" containerName="ceilometer-central-agent" Dec 02 14:07:37 crc kubenswrapper[4625]: E1202 14:07:37.778271 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60abedde-73d4-45e3-b0af-01d4dc811052" containerName="proxy-httpd" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.778278 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="60abedde-73d4-45e3-b0af-01d4dc811052" containerName="proxy-httpd" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.778540 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="60abedde-73d4-45e3-b0af-01d4dc811052" containerName="ceilometer-notification-agent" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.778553 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="60abedde-73d4-45e3-b0af-01d4dc811052" containerName="proxy-httpd" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.778567 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="60abedde-73d4-45e3-b0af-01d4dc811052" containerName="sg-core" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.778579 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="60abedde-73d4-45e3-b0af-01d4dc811052" containerName="ceilometer-central-agent" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.780614 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.783840 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.801205 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.805038 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.805259 4625 scope.go:117] "RemoveContainer" containerID="e031af8c78503240257740187e5cd8e4d65a9e2361ee6010e82573b660d1a127" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.892581 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a55923-8fce-4593-ab7c-5b399ceeddcf-log-httpd\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.892650 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-scripts\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.892924 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.893127 4625 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.893269 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a55923-8fce-4593-ab7c-5b399ceeddcf-run-httpd\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.893336 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8cw8\" (UniqueName: \"kubernetes.io/projected/65a55923-8fce-4593-ab7c-5b399ceeddcf-kube-api-access-n8cw8\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.893507 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-config-data\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.997822 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.998061 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.998133 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a55923-8fce-4593-ab7c-5b399ceeddcf-run-httpd\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.998174 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8cw8\" (UniqueName: \"kubernetes.io/projected/65a55923-8fce-4593-ab7c-5b399ceeddcf-kube-api-access-n8cw8\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.998212 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-config-data\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.998284 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a55923-8fce-4593-ab7c-5b399ceeddcf-log-httpd\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 
14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.998344 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-scripts\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.999362 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a55923-8fce-4593-ab7c-5b399ceeddcf-log-httpd\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:37 crc kubenswrapper[4625]: I1202 14:07:37.999763 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a55923-8fce-4593-ab7c-5b399ceeddcf-run-httpd\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:38 crc kubenswrapper[4625]: I1202 14:07:38.008795 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-scripts\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:38 crc kubenswrapper[4625]: I1202 14:07:38.010858 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:38 crc kubenswrapper[4625]: I1202 14:07:38.023463 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-config-data\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:38 crc kubenswrapper[4625]: I1202 14:07:38.024080 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:38 crc kubenswrapper[4625]: I1202 14:07:38.028072 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8cw8\" (UniqueName: \"kubernetes.io/projected/65a55923-8fce-4593-ab7c-5b399ceeddcf-kube-api-access-n8cw8\") pod \"ceilometer-0\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " pod="openstack/ceilometer-0" Dec 02 14:07:38 crc kubenswrapper[4625]: I1202 14:07:38.122754 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:07:38 crc kubenswrapper[4625]: I1202 14:07:38.150389 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 14:07:38 crc kubenswrapper[4625]: I1202 14:07:38.150563 4625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 02 14:07:38 crc kubenswrapper[4625]: I1202 14:07:38.399611 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 02 14:07:38 crc kubenswrapper[4625]: I1202 14:07:38.682782 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:07:38 crc kubenswrapper[4625]: W1202 14:07:38.687459 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65a55923_8fce_4593_ab7c_5b399ceeddcf.slice/crio-f9e6e215624d80ee110c8b074bf3d1a0c0d008207305b001de780417e9201f0e WatchSource:0}: Error finding container f9e6e215624d80ee110c8b074bf3d1a0c0d008207305b001de780417e9201f0e: Status 404 returned error can't find the container with id f9e6e215624d80ee110c8b074bf3d1a0c0d008207305b001de780417e9201f0e Dec 02 14:07:38 crc kubenswrapper[4625]: I1202 14:07:38.878246 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60abedde-73d4-45e3-b0af-01d4dc811052" path="/var/lib/kubelet/pods/60abedde-73d4-45e3-b0af-01d4dc811052/volumes" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.093922 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-52qdn" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.232121 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbnh6\" (UniqueName: \"kubernetes.io/projected/f780395c-9363-4a42-9f25-f7ad97bc51b3-kube-api-access-wbnh6\") pod \"f780395c-9363-4a42-9f25-f7ad97bc51b3\" (UID: \"f780395c-9363-4a42-9f25-f7ad97bc51b3\") " Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.232180 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-scripts\") pod \"f780395c-9363-4a42-9f25-f7ad97bc51b3\" (UID: \"f780395c-9363-4a42-9f25-f7ad97bc51b3\") " Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.232207 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-combined-ca-bundle\") pod \"f780395c-9363-4a42-9f25-f7ad97bc51b3\" (UID: \"f780395c-9363-4a42-9f25-f7ad97bc51b3\") " Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.232273 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-config-data\") pod \"f780395c-9363-4a42-9f25-f7ad97bc51b3\" (UID: \"f780395c-9363-4a42-9f25-f7ad97bc51b3\") " Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.240852 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f780395c-9363-4a42-9f25-f7ad97bc51b3-kube-api-access-wbnh6" (OuterVolumeSpecName: "kube-api-access-wbnh6") pod "f780395c-9363-4a42-9f25-f7ad97bc51b3" (UID: "f780395c-9363-4a42-9f25-f7ad97bc51b3"). InnerVolumeSpecName "kube-api-access-wbnh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.243284 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-scripts" (OuterVolumeSpecName: "scripts") pod "f780395c-9363-4a42-9f25-f7ad97bc51b3" (UID: "f780395c-9363-4a42-9f25-f7ad97bc51b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.270770 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-config-data" (OuterVolumeSpecName: "config-data") pod "f780395c-9363-4a42-9f25-f7ad97bc51b3" (UID: "f780395c-9363-4a42-9f25-f7ad97bc51b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.279147 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f780395c-9363-4a42-9f25-f7ad97bc51b3" (UID: "f780395c-9363-4a42-9f25-f7ad97bc51b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.335895 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbnh6\" (UniqueName: \"kubernetes.io/projected/f780395c-9363-4a42-9f25-f7ad97bc51b3-kube-api-access-wbnh6\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.336415 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.336430 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.336444 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f780395c-9363-4a42-9f25-f7ad97bc51b3-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.689290 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-52qdn" event={"ID":"f780395c-9363-4a42-9f25-f7ad97bc51b3","Type":"ContainerDied","Data":"d5d745600a92eb58047c66cd70d5359b19e345cb03c06b0cae7100d2d33caec5"} Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.689368 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d745600a92eb58047c66cd70d5359b19e345cb03c06b0cae7100d2d33caec5" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.689398 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-52qdn" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.690742 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a55923-8fce-4593-ab7c-5b399ceeddcf","Type":"ContainerStarted","Data":"bb4a0f59934bb97e26204529927b1dd947926ee0e5e6ca193eef95d918d0bd59"} Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.690790 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a55923-8fce-4593-ab7c-5b399ceeddcf","Type":"ContainerStarted","Data":"f9e6e215624d80ee110c8b074bf3d1a0c0d008207305b001de780417e9201f0e"} Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.835816 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 14:07:39 crc kubenswrapper[4625]: E1202 14:07:39.836363 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f780395c-9363-4a42-9f25-f7ad97bc51b3" containerName="nova-cell0-conductor-db-sync" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.836525 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f780395c-9363-4a42-9f25-f7ad97bc51b3" containerName="nova-cell0-conductor-db-sync" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.836920 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="f780395c-9363-4a42-9f25-f7ad97bc51b3" containerName="nova-cell0-conductor-db-sync" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.837733 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.843849 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.844266 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zfgtr" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.853842 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.879137 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc85f464-74b0-41f9-9997-21e67c3c7e3a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cc85f464-74b0-41f9-9997-21e67c3c7e3a\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.879881 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc85f464-74b0-41f9-9997-21e67c3c7e3a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cc85f464-74b0-41f9-9997-21e67c3c7e3a\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.880035 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpbsh\" (UniqueName: \"kubernetes.io/projected/cc85f464-74b0-41f9-9997-21e67c3c7e3a-kube-api-access-dpbsh\") pod \"nova-cell0-conductor-0\" (UID: \"cc85f464-74b0-41f9-9997-21e67c3c7e3a\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.984964 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cc85f464-74b0-41f9-9997-21e67c3c7e3a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cc85f464-74b0-41f9-9997-21e67c3c7e3a\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.985077 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpbsh\" (UniqueName: \"kubernetes.io/projected/cc85f464-74b0-41f9-9997-21e67c3c7e3a-kube-api-access-dpbsh\") pod \"nova-cell0-conductor-0\" (UID: \"cc85f464-74b0-41f9-9997-21e67c3c7e3a\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.985203 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc85f464-74b0-41f9-9997-21e67c3c7e3a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cc85f464-74b0-41f9-9997-21e67c3c7e3a\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.993056 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc85f464-74b0-41f9-9997-21e67c3c7e3a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cc85f464-74b0-41f9-9997-21e67c3c7e3a\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:07:39 crc kubenswrapper[4625]: I1202 14:07:39.995336 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc85f464-74b0-41f9-9997-21e67c3c7e3a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cc85f464-74b0-41f9-9997-21e67c3c7e3a\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:07:40 crc kubenswrapper[4625]: I1202 14:07:40.018276 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpbsh\" (UniqueName: \"kubernetes.io/projected/cc85f464-74b0-41f9-9997-21e67c3c7e3a-kube-api-access-dpbsh\") pod \"nova-cell0-conductor-0\" (UID: \"cc85f464-74b0-41f9-9997-21e67c3c7e3a\") " pod="openstack/nova-cell0-conductor-0" Dec 02 14:07:40 crc kubenswrapper[4625]: I1202 14:07:40.219464 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 02 14:07:40 crc kubenswrapper[4625]: I1202 14:07:40.761791 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 02 14:07:40 crc kubenswrapper[4625]: W1202 14:07:40.770158 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc85f464_74b0_41f9_9997_21e67c3c7e3a.slice/crio-69cc96d6d165174625fbb42044ccdbd8dc0a4efbf2057426602fa4d4d9783732 WatchSource:0}: Error finding container 69cc96d6d165174625fbb42044ccdbd8dc0a4efbf2057426602fa4d4d9783732: Status 404 returned error can't find the container with id 69cc96d6d165174625fbb42044ccdbd8dc0a4efbf2057426602fa4d4d9783732 Dec 02 14:07:41 crc kubenswrapper[4625]: I1202 14:07:41.744469 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cc85f464-74b0-41f9-9997-21e67c3c7e3a","Type":"ContainerStarted","Data":"1a393ba36690d976b58085724000c34606b6ffa0e97e53fc3a6f9ae454c999b0"} Dec 02 14:07:41 crc kubenswrapper[4625]: I1202 14:07:41.745654 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cc85f464-74b0-41f9-9997-21e67c3c7e3a","Type":"ContainerStarted","Data":"69cc96d6d165174625fbb42044ccdbd8dc0a4efbf2057426602fa4d4d9783732"} Dec 02 14:07:41 crc kubenswrapper[4625]: I1202 14:07:41.745774 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 02 14:07:41 crc kubenswrapper[4625]: I1202 14:07:41.752399 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a55923-8fce-4593-ab7c-5b399ceeddcf","Type":"ContainerStarted","Data":"ac9ba79467fe4218a1291fad8aeff3ed837383f370164bb4645a4e5eebed2bde"} Dec 02 14:07:41 crc kubenswrapper[4625]: I1202 14:07:41.752461 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a55923-8fce-4593-ab7c-5b399ceeddcf","Type":"ContainerStarted","Data":"b6e6d8376835b9c77ac602b2620afdc252cef4e8ab26b49b859c540cf6469d73"} Dec 02 14:07:41 crc kubenswrapper[4625]: I1202 14:07:41.772637 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.772614431 podStartE2EDuration="2.772614431s" podCreationTimestamp="2025-12-02 14:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:07:41.768240821 +0000 UTC m=+1417.730417896" watchObservedRunningTime="2025-12-02 14:07:41.772614431 +0000 UTC m=+1417.734791506" Dec 02 14:07:44 crc kubenswrapper[4625]: I1202 14:07:44.794392 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a55923-8fce-4593-ab7c-5b399ceeddcf","Type":"ContainerStarted","Data":"2afe16dd9b6a3b54ae8d7542ca0a7d0040550d1bd6c481bfb6f796e10fb903a7"} Dec 02 14:07:44 crc kubenswrapper[4625]: I1202 14:07:44.797240 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 14:07:45 crc kubenswrapper[4625]: I1202 14:07:45.596752 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.449099693 podStartE2EDuration="8.596723257s" podCreationTimestamp="2025-12-02 14:07:37 +0000 UTC" firstStartedPulling="2025-12-02 14:07:38.690953272 +0000 UTC m=+1414.653130347" 
lastFinishedPulling="2025-12-02 14:07:43.838576836 +0000 UTC m=+1419.800753911" observedRunningTime="2025-12-02 14:07:44.835002142 +0000 UTC m=+1420.797179237" watchObservedRunningTime="2025-12-02 14:07:45.596723257 +0000 UTC m=+1421.558900332" Dec 02 14:07:45 crc kubenswrapper[4625]: I1202 14:07:45.607615 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xf8ps"] Dec 02 14:07:45 crc kubenswrapper[4625]: I1202 14:07:45.611766 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:07:45 crc kubenswrapper[4625]: I1202 14:07:45.632952 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xf8ps"] Dec 02 14:07:45 crc kubenswrapper[4625]: I1202 14:07:45.781468 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccd8f47-db22-4700-8db7-b5e94abc38ed-catalog-content\") pod \"redhat-operators-xf8ps\" (UID: \"eccd8f47-db22-4700-8db7-b5e94abc38ed\") " pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:07:45 crc kubenswrapper[4625]: I1202 14:07:45.781600 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccd8f47-db22-4700-8db7-b5e94abc38ed-utilities\") pod \"redhat-operators-xf8ps\" (UID: \"eccd8f47-db22-4700-8db7-b5e94abc38ed\") " pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:07:45 crc kubenswrapper[4625]: I1202 14:07:45.781967 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6lkm\" (UniqueName: \"kubernetes.io/projected/eccd8f47-db22-4700-8db7-b5e94abc38ed-kube-api-access-l6lkm\") pod \"redhat-operators-xf8ps\" (UID: \"eccd8f47-db22-4700-8db7-b5e94abc38ed\") " pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:07:45 crc kubenswrapper[4625]: I1202 14:07:45.884800 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccd8f47-db22-4700-8db7-b5e94abc38ed-catalog-content\") pod \"redhat-operators-xf8ps\" (UID: \"eccd8f47-db22-4700-8db7-b5e94abc38ed\") " pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:07:45 crc kubenswrapper[4625]: I1202 14:07:45.884902 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccd8f47-db22-4700-8db7-b5e94abc38ed-utilities\") pod \"redhat-operators-xf8ps\" (UID: \"eccd8f47-db22-4700-8db7-b5e94abc38ed\") " pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:07:45 crc kubenswrapper[4625]: I1202 14:07:45.885543 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccd8f47-db22-4700-8db7-b5e94abc38ed-catalog-content\") pod \"redhat-operators-xf8ps\" (UID: \"eccd8f47-db22-4700-8db7-b5e94abc38ed\") " pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:07:45 crc kubenswrapper[4625]: I1202 14:07:45.885552 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6lkm\" (UniqueName: \"kubernetes.io/projected/eccd8f47-db22-4700-8db7-b5e94abc38ed-kube-api-access-l6lkm\") pod \"redhat-operators-xf8ps\" (UID: \"eccd8f47-db22-4700-8db7-b5e94abc38ed\") " 
pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:07:45 crc kubenswrapper[4625]: I1202 14:07:45.885748 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccd8f47-db22-4700-8db7-b5e94abc38ed-utilities\") pod \"redhat-operators-xf8ps\" (UID: \"eccd8f47-db22-4700-8db7-b5e94abc38ed\") " pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:07:45 crc kubenswrapper[4625]: I1202 14:07:45.910384 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6lkm\" (UniqueName: \"kubernetes.io/projected/eccd8f47-db22-4700-8db7-b5e94abc38ed-kube-api-access-l6lkm\") pod \"redhat-operators-xf8ps\" (UID: \"eccd8f47-db22-4700-8db7-b5e94abc38ed\") " pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:07:45 crc kubenswrapper[4625]: I1202 14:07:45.969552 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:07:46 crc kubenswrapper[4625]: I1202 14:07:46.096408 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c94877878-jvhxv" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 02 14:07:46 crc kubenswrapper[4625]: I1202 14:07:46.096850 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:07:46 crc kubenswrapper[4625]: I1202 14:07:46.604259 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xf8ps"] Dec 02 14:07:46 crc kubenswrapper[4625]: W1202 14:07:46.609810 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeccd8f47_db22_4700_8db7_b5e94abc38ed.slice/crio-17b98bceb6c067c0bd086363d718f7713994382252df25b0dbbb5e45cefd66fc WatchSource:0}: Error finding container 17b98bceb6c067c0bd086363d718f7713994382252df25b0dbbb5e45cefd66fc: Status 404 returned error can't find the container with id 17b98bceb6c067c0bd086363d718f7713994382252df25b0dbbb5e45cefd66fc Dec 02 14:07:46 crc kubenswrapper[4625]: I1202 14:07:46.820446 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xf8ps" event={"ID":"eccd8f47-db22-4700-8db7-b5e94abc38ed","Type":"ContainerStarted","Data":"17b98bceb6c067c0bd086363d718f7713994382252df25b0dbbb5e45cefd66fc"} Dec 02 14:07:47 crc kubenswrapper[4625]: I1202 14:07:47.850001 4625 generic.go:334] "Generic (PLEG): container finished" podID="eccd8f47-db22-4700-8db7-b5e94abc38ed" containerID="770065d3faa5033255c2095a53dc39dba4b0add1c3434fc9fdfec75349e62d88" exitCode=0 Dec 02 14:07:47 crc kubenswrapper[4625]: I1202 14:07:47.850377 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xf8ps" event={"ID":"eccd8f47-db22-4700-8db7-b5e94abc38ed","Type":"ContainerDied","Data":"770065d3faa5033255c2095a53dc39dba4b0add1c3434fc9fdfec75349e62d88"} Dec 02 14:07:49 crc kubenswrapper[4625]: I1202 14:07:49.271377 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:07:49 crc 
kubenswrapper[4625]: I1202 14:07:49.273703 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:07:49 crc kubenswrapper[4625]: I1202 14:07:49.886142 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xf8ps" event={"ID":"eccd8f47-db22-4700-8db7-b5e94abc38ed","Type":"ContainerStarted","Data":"2f3832feacd68b65faa3bb68e421790d181d48380521042f8c4784e0bb7242f0"} Dec 02 14:07:50 crc kubenswrapper[4625]: I1202 14:07:50.265014 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.055514 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-p9qz6"] Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.058364 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p9qz6" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.063000 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.064445 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.076798 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-p9qz6"] Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.245520 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-scripts\") pod \"nova-cell0-cell-mapping-p9qz6\" (UID: \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\") " pod="openstack/nova-cell0-cell-mapping-p9qz6" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.246630 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7ts7\" (UniqueName: \"kubernetes.io/projected/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-kube-api-access-k7ts7\") pod \"nova-cell0-cell-mapping-p9qz6\" (UID: \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\") " pod="openstack/nova-cell0-cell-mapping-p9qz6" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.246946 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-config-data\") pod \"nova-cell0-cell-mapping-p9qz6\" (UID: \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\") " pod="openstack/nova-cell0-cell-mapping-p9qz6" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.247087 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-p9qz6\" (UID: \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\") " pod="openstack/nova-cell0-cell-mapping-p9qz6" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.349956 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-scripts\") pod \"nova-cell0-cell-mapping-p9qz6\" (UID: \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\") " pod="openstack/nova-cell0-cell-mapping-p9qz6" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.352522 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7ts7\" (UniqueName: \"kubernetes.io/projected/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-kube-api-access-k7ts7\") pod \"nova-cell0-cell-mapping-p9qz6\" (UID: \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\") " pod="openstack/nova-cell0-cell-mapping-p9qz6" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.352851 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-config-data\") pod \"nova-cell0-cell-mapping-p9qz6\" (UID: \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\") " pod="openstack/nova-cell0-cell-mapping-p9qz6" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.352958 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-p9qz6\" (UID: \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\") " pod="openstack/nova-cell0-cell-mapping-p9qz6" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.396221 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-p9qz6\" (UID: \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\") " pod="openstack/nova-cell0-cell-mapping-p9qz6" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.396575 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-config-data\") pod \"nova-cell0-cell-mapping-p9qz6\" (UID: \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\") " pod="openstack/nova-cell0-cell-mapping-p9qz6" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.399550 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-scripts\") pod \"nova-cell0-cell-mapping-p9qz6\" (UID: \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\") " pod="openstack/nova-cell0-cell-mapping-p9qz6" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.408993 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7ts7\" (UniqueName: \"kubernetes.io/projected/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-kube-api-access-k7ts7\") pod \"nova-cell0-cell-mapping-p9qz6\" (UID: \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\") " pod="openstack/nova-cell0-cell-mapping-p9qz6" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.690820 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p9qz6" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.824630 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.826938 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.858036 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.960435 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.964237 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.969433 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgmbc\" (UniqueName: \"kubernetes.io/projected/e40e6e8b-feb1-4a80-a962-bfad2645f094-kube-api-access-xgmbc\") pod \"nova-cell1-novncproxy-0\" (UID: \"e40e6e8b-feb1-4a80-a962-bfad2645f094\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.969846 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40e6e8b-feb1-4a80-a962-bfad2645f094-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e40e6e8b-feb1-4a80-a962-bfad2645f094\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.969943 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40e6e8b-feb1-4a80-a962-bfad2645f094-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e40e6e8b-feb1-4a80-a962-bfad2645f094\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.978693 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:07:51 crc kubenswrapper[4625]: I1202 14:07:51.980965 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.037031 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.083704 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgmbc\" (UniqueName: \"kubernetes.io/projected/e40e6e8b-feb1-4a80-a962-bfad2645f094-kube-api-access-xgmbc\") pod \"nova-cell1-novncproxy-0\" (UID: \"e40e6e8b-feb1-4a80-a962-bfad2645f094\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.085192 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\") " pod="openstack/nova-metadata-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.085387 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhdrm\" (UniqueName: \"kubernetes.io/projected/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-kube-api-access-rhdrm\") pod \"nova-metadata-0\" (UID: \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\") " pod="openstack/nova-metadata-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.085503 4625 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-config-data\") pod \"nova-metadata-0\" (UID: \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\") " pod="openstack/nova-metadata-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.085624 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40e6e8b-feb1-4a80-a962-bfad2645f094-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e40e6e8b-feb1-4a80-a962-bfad2645f094\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.092508 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40e6e8b-feb1-4a80-a962-bfad2645f094-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e40e6e8b-feb1-4a80-a962-bfad2645f094\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.093649 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-logs\") pod \"nova-metadata-0\" (UID: \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\") " pod="openstack/nova-metadata-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.148090 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40e6e8b-feb1-4a80-a962-bfad2645f094-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e40e6e8b-feb1-4a80-a962-bfad2645f094\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.150930 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40e6e8b-feb1-4a80-a962-bfad2645f094-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e40e6e8b-feb1-4a80-a962-bfad2645f094\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.188889 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.194547 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgmbc\" (UniqueName: \"kubernetes.io/projected/e40e6e8b-feb1-4a80-a962-bfad2645f094-kube-api-access-xgmbc\") pod \"nova-cell1-novncproxy-0\" (UID: \"e40e6e8b-feb1-4a80-a962-bfad2645f094\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.195218 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.196180 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhdrm\" (UniqueName: \"kubernetes.io/projected/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-kube-api-access-rhdrm\") pod \"nova-metadata-0\" (UID: \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\") " pod="openstack/nova-metadata-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.196234 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-config-data\") pod \"nova-metadata-0\" (UID: \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\") " pod="openstack/nova-metadata-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.214613 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-logs\") pod \"nova-metadata-0\" (UID: \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\") " pod="openstack/nova-metadata-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.214868 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\") " pod="openstack/nova-metadata-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.216078 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.222165 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-logs\") pod \"nova-metadata-0\" (UID: \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\") " pod="openstack/nova-metadata-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.223417 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.263083 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\") " pod="openstack/nova-metadata-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.276661 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-config-data\") pod \"nova-metadata-0\" (UID: \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\") " pod="openstack/nova-metadata-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.281971 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhdrm\" (UniqueName: \"kubernetes.io/projected/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-kube-api-access-rhdrm\") pod \"nova-metadata-0\" (UID: \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\") " pod="openstack/nova-metadata-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.328727 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.329148 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.329528 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\") " pod="openstack/nova-api-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.329657 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bggt\" (UniqueName: \"kubernetes.io/projected/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-kube-api-access-6bggt\") pod \"nova-api-0\" (UID: \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\") " pod="openstack/nova-api-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.329701 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-logs\") pod \"nova-api-0\" (UID: \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\") " pod="openstack/nova-api-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.329819 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-config-data\") pod \"nova-api-0\" (UID: \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\") " pod="openstack/nova-api-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.435986 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\") " pod="openstack/nova-api-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.436091 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bggt\" (UniqueName: \"kubernetes.io/projected/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-kube-api-access-6bggt\") pod \"nova-api-0\" (UID: \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\") " pod="openstack/nova-api-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.436140 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-logs\") pod \"nova-api-0\" (UID: \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\") " pod="openstack/nova-api-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.436197 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-config-data\") pod \"nova-api-0\" (UID: \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\") " pod="openstack/nova-api-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.453157 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-logs\") pod \"nova-api-0\" (UID: \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\") " pod="openstack/nova-api-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.470113 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\") " pod="openstack/nova-api-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.487215 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-config-data\") pod \"nova-api-0\" (UID: \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\") " pod="openstack/nova-api-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.548440 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bggt\" (UniqueName: \"kubernetes.io/projected/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-kube-api-access-6bggt\") pod \"nova-api-0\" (UID: \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\") " pod="openstack/nova-api-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.584130 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.645155 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-sfhvx"] Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.647651 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.712544 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.735550 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.735663 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-dns-svc\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.735720 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.739695 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-sfhvx"] Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.746157 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-config\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.747196 4625 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bvb8\" (UniqueName: \"kubernetes.io/projected/56f84277-4e5d-4772-9da9-f9bd1aa4e637-kube-api-access-5bvb8\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.854285 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.854425 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-dns-svc\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.854466 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.854538 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-config\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.854695 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bvb8\" (UniqueName: \"kubernetes.io/projected/56f84277-4e5d-4772-9da9-f9bd1aa4e637-kube-api-access-5bvb8\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.854855 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.856293 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.856811 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.859182 4625 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-config\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.875826 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:52 crc kubenswrapper[4625]: I1202 14:07:52.876459 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-dns-svc\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.094231 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.133074 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.141936 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.154964 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bvb8\" (UniqueName: \"kubernetes.io/projected/56f84277-4e5d-4772-9da9-f9bd1aa4e637-kube-api-access-5bvb8\") pod \"dnsmasq-dns-757b4f8459-sfhvx\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.219106 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af36017-bcb5-46cb-a07d-d65dd6152f6f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7af36017-bcb5-46cb-a07d-d65dd6152f6f\") " pod="openstack/nova-scheduler-0" Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.219170 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9jlf\" (UniqueName: \"kubernetes.io/projected/7af36017-bcb5-46cb-a07d-d65dd6152f6f-kube-api-access-p9jlf\") pod \"nova-scheduler-0\" (UID: \"7af36017-bcb5-46cb-a07d-d65dd6152f6f\") " pod="openstack/nova-scheduler-0" Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.226252 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.227293 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af36017-bcb5-46cb-a07d-d65dd6152f6f-config-data\") pod \"nova-scheduler-0\" (UID: \"7af36017-bcb5-46cb-a07d-d65dd6152f6f\") " pod="openstack/nova-scheduler-0" Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.254414 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-p9qz6"] Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.329930 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7af36017-bcb5-46cb-a07d-d65dd6152f6f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7af36017-bcb5-46cb-a07d-d65dd6152f6f\") " pod="openstack/nova-scheduler-0" Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.329992 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9jlf\" (UniqueName: \"kubernetes.io/projected/7af36017-bcb5-46cb-a07d-d65dd6152f6f-kube-api-access-p9jlf\") pod \"nova-scheduler-0\" (UID: \"7af36017-bcb5-46cb-a07d-d65dd6152f6f\") " pod="openstack/nova-scheduler-0" Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.330051 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af36017-bcb5-46cb-a07d-d65dd6152f6f-config-data\") pod \"nova-scheduler-0\" (UID: \"7af36017-bcb5-46cb-a07d-d65dd6152f6f\") " pod="openstack/nova-scheduler-0" Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.342148 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af36017-bcb5-46cb-a07d-d65dd6152f6f-config-data\") pod \"nova-scheduler-0\" (UID: \"7af36017-bcb5-46cb-a07d-d65dd6152f6f\") " pod="openstack/nova-scheduler-0" Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.356650 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9jlf\" (UniqueName: \"kubernetes.io/projected/7af36017-bcb5-46cb-a07d-d65dd6152f6f-kube-api-access-p9jlf\") pod \"nova-scheduler-0\" (UID: \"7af36017-bcb5-46cb-a07d-d65dd6152f6f\") " pod="openstack/nova-scheduler-0" Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.364702 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af36017-bcb5-46cb-a07d-d65dd6152f6f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7af36017-bcb5-46cb-a07d-d65dd6152f6f\") " pod="openstack/nova-scheduler-0" Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.385864 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.466033 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.862062 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:07:53 crc kubenswrapper[4625]: I1202 14:07:53.942706 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:07:54 crc kubenswrapper[4625]: I1202 14:07:54.018444 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:07:54 crc kubenswrapper[4625]: I1202 14:07:54.178171 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p9qz6" event={"ID":"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab","Type":"ContainerStarted","Data":"39250b2503defc51b13bd509ccf8e27069ce19cbed405cd56daef5d9c68b22fc"} Dec 02 14:07:54 crc kubenswrapper[4625]: I1202 14:07:54.180758 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e40e6e8b-feb1-4a80-a962-bfad2645f094","Type":"ContainerStarted","Data":"cd4f1e559a92e75217a95314ffc831f16d4934fb96d63a256e2a4c323220807f"} Dec 02 14:07:54 crc kubenswrapper[4625]: I1202 14:07:54.198136 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3","Type":"ContainerStarted","Data":"4c90cf55136bb8a83ee22c044515738fc00122279b075dbaba6d0652d048cf20"} Dec 02 14:07:54 crc kubenswrapper[4625]: I1202 14:07:54.217523 4625 generic.go:334] "Generic (PLEG): container finished" podID="eccd8f47-db22-4700-8db7-b5e94abc38ed" containerID="2f3832feacd68b65faa3bb68e421790d181d48380521042f8c4784e0bb7242f0" exitCode=0 Dec 02 14:07:54 crc kubenswrapper[4625]: I1202 14:07:54.217662 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xf8ps" event={"ID":"eccd8f47-db22-4700-8db7-b5e94abc38ed","Type":"ContainerDied","Data":"2f3832feacd68b65faa3bb68e421790d181d48380521042f8c4784e0bb7242f0"} Dec 02 14:07:54 crc kubenswrapper[4625]: I1202 14:07:54.228422 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4e26075-b67d-4f81-bfdb-5fc40347fe4f","Type":"ContainerStarted","Data":"13933fda5801551540e54328989e08ddbb389f495b18a35cb072bfb3bf99ef59"} Dec 02 14:07:54 crc kubenswrapper[4625]: I1202 14:07:54.543548 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-sfhvx"] Dec 02 14:07:54 crc kubenswrapper[4625]: I1202 14:07:54.741529 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.354539 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p9qz6" event={"ID":"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab","Type":"ContainerStarted","Data":"3a156d4ab635c2ca274d781e865910d9221c487f02a47308f3ac022ff72e8181"} Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.361140 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" event={"ID":"56f84277-4e5d-4772-9da9-f9bd1aa4e637","Type":"ContainerStarted","Data":"fb94520465d8661b61ab469ac30be281ef502942140291510c25515011d75744"} Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.380529 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"7af36017-bcb5-46cb-a07d-d65dd6152f6f","Type":"ContainerStarted","Data":"41f9619bf04a9d3f8c240e1d02c350259c9487023ade9dcefc26043284f259f0"} Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.381974 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-p9qz6" podStartSLOduration=4.3819605169999996 podStartE2EDuration="4.381960517s" podCreationTimestamp="2025-12-02 14:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:07:55.381157527 +0000 UTC m=+1431.343334602" watchObservedRunningTime="2025-12-02 14:07:55.381960517 +0000 UTC m=+1431.344137592" Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.668579 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sgm2m"] Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.686178 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sgm2m" Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.690251 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.700826 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.711805 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sgm2m"] Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.747044 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddt6k\" (UniqueName: \"kubernetes.io/projected/5b257027-4621-495c-b675-99d14b598340-kube-api-access-ddt6k\") pod \"nova-cell1-conductor-db-sync-sgm2m\" (UID: \"5b257027-4621-495c-b675-99d14b598340\") " pod="openstack/nova-cell1-conductor-db-sync-sgm2m" Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.747117 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-scripts\") pod \"nova-cell1-conductor-db-sync-sgm2m\" (UID: \"5b257027-4621-495c-b675-99d14b598340\") " pod="openstack/nova-cell1-conductor-db-sync-sgm2m" Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.747150 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sgm2m\" (UID: \"5b257027-4621-495c-b675-99d14b598340\") " pod="openstack/nova-cell1-conductor-db-sync-sgm2m" Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.747206 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-config-data\") pod \"nova-cell1-conductor-db-sync-sgm2m\" (UID: \"5b257027-4621-495c-b675-99d14b598340\") " pod="openstack/nova-cell1-conductor-db-sync-sgm2m" Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.866044 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-scripts\") pod 
\"nova-cell1-conductor-db-sync-sgm2m\" (UID: \"5b257027-4621-495c-b675-99d14b598340\") " pod="openstack/nova-cell1-conductor-db-sync-sgm2m" Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.866196 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sgm2m\" (UID: \"5b257027-4621-495c-b675-99d14b598340\") " pod="openstack/nova-cell1-conductor-db-sync-sgm2m" Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.866474 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-config-data\") pod \"nova-cell1-conductor-db-sync-sgm2m\" (UID: \"5b257027-4621-495c-b675-99d14b598340\") " pod="openstack/nova-cell1-conductor-db-sync-sgm2m" Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.866903 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddt6k\" (UniqueName: \"kubernetes.io/projected/5b257027-4621-495c-b675-99d14b598340-kube-api-access-ddt6k\") pod \"nova-cell1-conductor-db-sync-sgm2m\" (UID: \"5b257027-4621-495c-b675-99d14b598340\") " pod="openstack/nova-cell1-conductor-db-sync-sgm2m" Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.884607 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-scripts\") pod \"nova-cell1-conductor-db-sync-sgm2m\" (UID: \"5b257027-4621-495c-b675-99d14b598340\") " pod="openstack/nova-cell1-conductor-db-sync-sgm2m" Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.917057 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddt6k\" (UniqueName: \"kubernetes.io/projected/5b257027-4621-495c-b675-99d14b598340-kube-api-access-ddt6k\") pod \"nova-cell1-conductor-db-sync-sgm2m\" (UID: \"5b257027-4621-495c-b675-99d14b598340\") " pod="openstack/nova-cell1-conductor-db-sync-sgm2m" Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.917965 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-config-data\") pod \"nova-cell1-conductor-db-sync-sgm2m\" (UID: \"5b257027-4621-495c-b675-99d14b598340\") " pod="openstack/nova-cell1-conductor-db-sync-sgm2m" Dec 02 14:07:55 crc kubenswrapper[4625]: I1202 14:07:55.924452 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sgm2m\" (UID: \"5b257027-4621-495c-b675-99d14b598340\") " pod="openstack/nova-cell1-conductor-db-sync-sgm2m" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.038842 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sgm2m" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.225659 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.435607 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-horizon-tls-certs\") pod \"04b6d9a8-9eed-441e-a627-83774df65ed9\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.436060 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btcvz\" (UniqueName: \"kubernetes.io/projected/04b6d9a8-9eed-441e-a627-83774df65ed9-kube-api-access-btcvz\") pod \"04b6d9a8-9eed-441e-a627-83774df65ed9\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.436118 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-horizon-secret-key\") pod \"04b6d9a8-9eed-441e-a627-83774df65ed9\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.436151 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-combined-ca-bundle\") pod \"04b6d9a8-9eed-441e-a627-83774df65ed9\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.436181 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04b6d9a8-9eed-441e-a627-83774df65ed9-logs\") pod \"04b6d9a8-9eed-441e-a627-83774df65ed9\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.436206 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04b6d9a8-9eed-441e-a627-83774df65ed9-scripts\") pod \"04b6d9a8-9eed-441e-a627-83774df65ed9\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.436240 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/04b6d9a8-9eed-441e-a627-83774df65ed9-config-data\") pod \"04b6d9a8-9eed-441e-a627-83774df65ed9\" (UID: \"04b6d9a8-9eed-441e-a627-83774df65ed9\") " Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.439802 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04b6d9a8-9eed-441e-a627-83774df65ed9-logs" (OuterVolumeSpecName: "logs") pod "04b6d9a8-9eed-441e-a627-83774df65ed9" (UID: "04b6d9a8-9eed-441e-a627-83774df65ed9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.474966 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "04b6d9a8-9eed-441e-a627-83774df65ed9" (UID: "04b6d9a8-9eed-441e-a627-83774df65ed9"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.545225 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b6d9a8-9eed-441e-a627-83774df65ed9-kube-api-access-btcvz" (OuterVolumeSpecName: "kube-api-access-btcvz") pod "04b6d9a8-9eed-441e-a627-83774df65ed9" (UID: "04b6d9a8-9eed-441e-a627-83774df65ed9"). InnerVolumeSpecName "kube-api-access-btcvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.548916 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btcvz\" (UniqueName: \"kubernetes.io/projected/04b6d9a8-9eed-441e-a627-83774df65ed9-kube-api-access-btcvz\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.548964 4625 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.548974 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04b6d9a8-9eed-441e-a627-83774df65ed9-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.611279 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04b6d9a8-9eed-441e-a627-83774df65ed9-scripts" (OuterVolumeSpecName: "scripts") pod "04b6d9a8-9eed-441e-a627-83774df65ed9" (UID: "04b6d9a8-9eed-441e-a627-83774df65ed9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.611520 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xf8ps" event={"ID":"eccd8f47-db22-4700-8db7-b5e94abc38ed","Type":"ContainerStarted","Data":"0b72565b371899063673d0cab17739af04098ccc796e9eb62afd206d1dffc37c"} Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.653035 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04b6d9a8-9eed-441e-a627-83774df65ed9-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.654143 4625 generic.go:334] "Generic (PLEG): container finished" podID="56f84277-4e5d-4772-9da9-f9bd1aa4e637" containerID="c40783744d8c9d77fa8a47285c0f9b9b3ba2f598f7deb415b74d37f1caaa28c4" exitCode=0 Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.654253 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" event={"ID":"56f84277-4e5d-4772-9da9-f9bd1aa4e637","Type":"ContainerDied","Data":"c40783744d8c9d77fa8a47285c0f9b9b3ba2f598f7deb415b74d37f1caaa28c4"} Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.700495 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "04b6d9a8-9eed-441e-a627-83774df65ed9" (UID: "04b6d9a8-9eed-441e-a627-83774df65ed9"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.727794 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xf8ps" podStartSLOduration=4.720918066 podStartE2EDuration="11.727758186s" podCreationTimestamp="2025-12-02 14:07:45 +0000 UTC" firstStartedPulling="2025-12-02 14:07:47.8655081 +0000 UTC m=+1423.827685185" lastFinishedPulling="2025-12-02 14:07:54.87234823 +0000 UTC m=+1430.834525305" observedRunningTime="2025-12-02 14:07:56.64993522 +0000 UTC m=+1432.612112295" watchObservedRunningTime="2025-12-02 14:07:56.727758186 +0000 UTC m=+1432.689935261" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.747188 4625 generic.go:334] "Generic (PLEG): container finished" podID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerID="c7796a4fbf01c822d9a56af51c98b455d177862f046fb3e2704f97fb5f5a4805" exitCode=137 Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.747887 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c94877878-jvhxv" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.751226 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c94877878-jvhxv" event={"ID":"04b6d9a8-9eed-441e-a627-83774df65ed9","Type":"ContainerDied","Data":"c7796a4fbf01c822d9a56af51c98b455d177862f046fb3e2704f97fb5f5a4805"} Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.751302 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c94877878-jvhxv" event={"ID":"04b6d9a8-9eed-441e-a627-83774df65ed9","Type":"ContainerDied","Data":"e6332762322b607966e6e5a32e31d491921997bba804b1194d69770519487cc9"} Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.751352 4625 scope.go:117] "RemoveContainer" containerID="0061a10534f7ca1b235f7c54381a27b61204ac5be16f48128c9ee3a3c6b5ee47" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.764282 4625 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.807721 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04b6d9a8-9eed-441e-a627-83774df65ed9-config-data" (OuterVolumeSpecName: "config-data") pod "04b6d9a8-9eed-441e-a627-83774df65ed9" (UID: "04b6d9a8-9eed-441e-a627-83774df65ed9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.830920 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04b6d9a8-9eed-441e-a627-83774df65ed9" (UID: "04b6d9a8-9eed-441e-a627-83774df65ed9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.866981 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b6d9a8-9eed-441e-a627-83774df65ed9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:56 crc kubenswrapper[4625]: I1202 14:07:56.867017 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/04b6d9a8-9eed-441e-a627-83774df65ed9-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:07:57 crc kubenswrapper[4625]: I1202 14:07:57.069294 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sgm2m"] Dec 02 14:07:57 crc kubenswrapper[4625]: I1202 14:07:57.101339 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c94877878-jvhxv"] Dec 02 14:07:57 crc kubenswrapper[4625]: I1202 14:07:57.116065 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5c94877878-jvhxv"] Dec 02 14:07:57 crc kubenswrapper[4625]: I1202 14:07:57.120493 4625 scope.go:117] "RemoveContainer" containerID="c7796a4fbf01c822d9a56af51c98b455d177862f046fb3e2704f97fb5f5a4805" Dec 02 14:07:57 crc kubenswrapper[4625]: W1202 14:07:57.155926 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b257027_4621_495c_b675_99d14b598340.slice/crio-1d3c54e786c1cbb9e8c29ce28d8e657c624790b7b8deecd87be403789b096f32 WatchSource:0}: Error finding container 1d3c54e786c1cbb9e8c29ce28d8e657c624790b7b8deecd87be403789b096f32: Status 404 returned error can't find the container with id 1d3c54e786c1cbb9e8c29ce28d8e657c624790b7b8deecd87be403789b096f32 Dec 02 14:07:57 crc kubenswrapper[4625]: I1202 14:07:57.218992 4625 scope.go:117] "RemoveContainer" containerID="0061a10534f7ca1b235f7c54381a27b61204ac5be16f48128c9ee3a3c6b5ee47" Dec 02 14:07:57 crc kubenswrapper[4625]: E1202 14:07:57.221708 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0061a10534f7ca1b235f7c54381a27b61204ac5be16f48128c9ee3a3c6b5ee47\": container with ID starting with 0061a10534f7ca1b235f7c54381a27b61204ac5be16f48128c9ee3a3c6b5ee47 not found: ID does not exist" containerID="0061a10534f7ca1b235f7c54381a27b61204ac5be16f48128c9ee3a3c6b5ee47" Dec 02 14:07:57 crc kubenswrapper[4625]: I1202 14:07:57.221787 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0061a10534f7ca1b235f7c54381a27b61204ac5be16f48128c9ee3a3c6b5ee47"} err="failed to get container status \"0061a10534f7ca1b235f7c54381a27b61204ac5be16f48128c9ee3a3c6b5ee47\": rpc error: code = NotFound desc = could not find container \"0061a10534f7ca1b235f7c54381a27b61204ac5be16f48128c9ee3a3c6b5ee47\": container with ID starting with 0061a10534f7ca1b235f7c54381a27b61204ac5be16f48128c9ee3a3c6b5ee47 not found: ID does not exist" Dec 02 14:07:57 crc kubenswrapper[4625]: I1202 14:07:57.221825 4625 scope.go:117] "RemoveContainer" containerID="c7796a4fbf01c822d9a56af51c98b455d177862f046fb3e2704f97fb5f5a4805" Dec 02 14:07:57 crc kubenswrapper[4625]: E1202 14:07:57.225169 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7796a4fbf01c822d9a56af51c98b455d177862f046fb3e2704f97fb5f5a4805\": container with ID starting with 
c7796a4fbf01c822d9a56af51c98b455d177862f046fb3e2704f97fb5f5a4805 not found: ID does not exist" containerID="c7796a4fbf01c822d9a56af51c98b455d177862f046fb3e2704f97fb5f5a4805" Dec 02 14:07:57 crc kubenswrapper[4625]: I1202 14:07:57.225275 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7796a4fbf01c822d9a56af51c98b455d177862f046fb3e2704f97fb5f5a4805"} err="failed to get container status \"c7796a4fbf01c822d9a56af51c98b455d177862f046fb3e2704f97fb5f5a4805\": rpc error: code = NotFound desc = could not find container \"c7796a4fbf01c822d9a56af51c98b455d177862f046fb3e2704f97fb5f5a4805\": container with ID starting with c7796a4fbf01c822d9a56af51c98b455d177862f046fb3e2704f97fb5f5a4805 not found: ID does not exist" Dec 02 14:07:57 crc kubenswrapper[4625]: I1202 14:07:57.774769 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sgm2m" event={"ID":"5b257027-4621-495c-b675-99d14b598340","Type":"ContainerStarted","Data":"1d3c54e786c1cbb9e8c29ce28d8e657c624790b7b8deecd87be403789b096f32"} Dec 02 14:07:58 crc kubenswrapper[4625]: I1202 14:07:58.818179 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" event={"ID":"56f84277-4e5d-4772-9da9-f9bd1aa4e637","Type":"ContainerStarted","Data":"a77f6d0bc91c16d06ef12ff62b61af474b1bf7666d8c61b04b5717812d035268"} Dec 02 14:07:58 crc kubenswrapper[4625]: I1202 14:07:58.818796 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:07:58 crc kubenswrapper[4625]: I1202 14:07:58.835037 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sgm2m" event={"ID":"5b257027-4621-495c-b675-99d14b598340","Type":"ContainerStarted","Data":"019ed058cf76d2fa5a7eb64b019e033a7a07c402b0949cfe4a52742aa248fd30"} Dec 02 14:07:58 crc kubenswrapper[4625]: I1202 14:07:58.889293 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" podStartSLOduration=6.889255726 podStartE2EDuration="6.889255726s" podCreationTimestamp="2025-12-02 14:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:07:58.865809638 +0000 UTC m=+1434.827986713" watchObservedRunningTime="2025-12-02 14:07:58.889255726 +0000 UTC m=+1434.851432801" Dec 02 14:07:58 crc kubenswrapper[4625]: I1202 14:07:58.933718 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" path="/var/lib/kubelet/pods/04b6d9a8-9eed-441e-a627-83774df65ed9/volumes" Dec 02 14:07:58 crc kubenswrapper[4625]: I1202 14:07:58.970107 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-sgm2m" podStartSLOduration=3.970071711 podStartE2EDuration="3.970071711s" podCreationTimestamp="2025-12-02 14:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:07:58.92912317 +0000 UTC m=+1434.891300245" watchObservedRunningTime="2025-12-02 14:07:58.970071711 +0000 UTC m=+1434.932248786" Dec 02 14:08:00 crc kubenswrapper[4625]: I1202 14:08:00.115130 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:08:00 crc kubenswrapper[4625]: I1202 14:08:00.156512 4625 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:08:01 crc kubenswrapper[4625]: I1202 14:08:01.100489 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5c94877878-jvhxv" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: i/o timeout" Dec 02 14:08:03 crc kubenswrapper[4625]: I1202 14:08:03.387693 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:08:03 crc kubenswrapper[4625]: I1202 14:08:03.617734 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lcqjf"] Dec 02 14:08:03 crc kubenswrapper[4625]: I1202 14:08:03.618577 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" podUID="cbaa9675-d1dc-4b23-962a-5607cbacad8d" containerName="dnsmasq-dns" containerID="cri-o://486163bcae764c6e4412db6f131005b340e165fd246b56105767872ce0ec7db0" gracePeriod=10 Dec 02 14:08:03 crc kubenswrapper[4625]: I1202 14:08:03.948150 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4e26075-b67d-4f81-bfdb-5fc40347fe4f","Type":"ContainerStarted","Data":"cf754b89d7814a0505b4d2c762d409b0a876d6b7b771058e252ba260db211ac8"} Dec 02 14:08:03 crc kubenswrapper[4625]: I1202 14:08:03.998641 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7af36017-bcb5-46cb-a07d-d65dd6152f6f","Type":"ContainerStarted","Data":"cdb762fb5d41ad81cfb0a1769b50fe224d7248e7f75f54edc6d023f6820a4e1e"} Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.025258 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e40e6e8b-feb1-4a80-a962-bfad2645f094","Type":"ContainerStarted","Data":"a726198fe6d4784095c28bf76cc4768e8e333b70379c2a78b554fd47d6316b86"} Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.025576 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e40e6e8b-feb1-4a80-a962-bfad2645f094" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a726198fe6d4784095c28bf76cc4768e8e333b70379c2a78b554fd47d6316b86" gracePeriod=30 Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.038485 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.248363473 podStartE2EDuration="12.038451078s" podCreationTimestamp="2025-12-02 14:07:52 +0000 UTC" firstStartedPulling="2025-12-02 14:07:54.784719809 +0000 UTC m=+1430.746896884" lastFinishedPulling="2025-12-02 14:08:02.574807414 +0000 UTC m=+1438.536984489" observedRunningTime="2025-12-02 14:08:04.026214951 +0000 UTC m=+1439.988392026" watchObservedRunningTime="2025-12-02 14:08:04.038451078 +0000 UTC m=+1440.000628153" Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.075802 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3","Type":"ContainerStarted","Data":"20204708912e834d5e442d2e2dd2a356c2c42074bce8fd7453175185dc99ca8a"} Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.075883 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3","Type":"ContainerStarted","Data":"0d684b81dc0f0a038ca71757739ee40b2c2285e2394e6d07d85926f7fc449b9c"} Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.104990 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.562333473 podStartE2EDuration="13.104966422s" podCreationTimestamp="2025-12-02 14:07:51 +0000 UTC" firstStartedPulling="2025-12-02 14:07:54.035476521 +0000 UTC m=+1429.997653596" lastFinishedPulling="2025-12-02 14:08:02.57810946 +0000 UTC m=+1438.540286545" observedRunningTime="2025-12-02 14:08:04.070494519 +0000 UTC m=+1440.032671594" watchObservedRunningTime="2025-12-02 14:08:04.104966422 +0000 UTC m=+1440.067143497" Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.105791 4625 generic.go:334] "Generic (PLEG): container finished" podID="cbaa9675-d1dc-4b23-962a-5607cbacad8d" containerID="486163bcae764c6e4412db6f131005b340e165fd246b56105767872ce0ec7db0" exitCode=0 Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.105870 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" event={"ID":"cbaa9675-d1dc-4b23-962a-5607cbacad8d","Type":"ContainerDied","Data":"486163bcae764c6e4412db6f131005b340e165fd246b56105767872ce0ec7db0"} Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.109678 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.612975305 podStartE2EDuration="12.109669574s" podCreationTimestamp="2025-12-02 14:07:52 +0000 UTC" firstStartedPulling="2025-12-02 14:07:54.077609333 +0000 UTC m=+1430.039786408" lastFinishedPulling="2025-12-02 14:08:02.574303602 +0000 UTC m=+1438.536480677" observedRunningTime="2025-12-02 14:08:04.100293031 +0000 UTC m=+1440.062470096" watchObservedRunningTime="2025-12-02 14:08:04.109669574 +0000 UTC m=+1440.071846649" Dec 02 14:08:04 crc kubenswrapper[4625]: E1202 14:08:04.218435 4625 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbaa9675_d1dc_4b23_962a_5607cbacad8d.slice/crio-486163bcae764c6e4412db6f131005b340e165fd246b56105767872ce0ec7db0.scope\": RecentStats: unable to find data in memory cache]" Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.691854 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.804684 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4cz8\" (UniqueName: \"kubernetes.io/projected/cbaa9675-d1dc-4b23-962a-5607cbacad8d-kube-api-access-g4cz8\") pod \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.804766 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-ovsdbserver-nb\") pod \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.804844 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-dns-svc\") pod \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.805087 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-config\") pod \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.805714 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-dns-swift-storage-0\") pod \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.805776 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-ovsdbserver-sb\") pod \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\" (UID: \"cbaa9675-d1dc-4b23-962a-5607cbacad8d\") " Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.832782 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbaa9675-d1dc-4b23-962a-5607cbacad8d-kube-api-access-g4cz8" (OuterVolumeSpecName: "kube-api-access-g4cz8") pod "cbaa9675-d1dc-4b23-962a-5607cbacad8d" (UID: "cbaa9675-d1dc-4b23-962a-5607cbacad8d"). InnerVolumeSpecName "kube-api-access-g4cz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.914257 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4cz8\" (UniqueName: \"kubernetes.io/projected/cbaa9675-d1dc-4b23-962a-5607cbacad8d-kube-api-access-g4cz8\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.955209 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cbaa9675-d1dc-4b23-962a-5607cbacad8d" (UID: "cbaa9675-d1dc-4b23-962a-5607cbacad8d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.978516 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cbaa9675-d1dc-4b23-962a-5607cbacad8d" (UID: "cbaa9675-d1dc-4b23-962a-5607cbacad8d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.984413 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cbaa9675-d1dc-4b23-962a-5607cbacad8d" (UID: "cbaa9675-d1dc-4b23-962a-5607cbacad8d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:08:04 crc kubenswrapper[4625]: I1202 14:08:04.999204 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-config" (OuterVolumeSpecName: "config") pod "cbaa9675-d1dc-4b23-962a-5607cbacad8d" (UID: "cbaa9675-d1dc-4b23-962a-5607cbacad8d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.020091 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.020431 4625 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.020513 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.020576 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.046134 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cbaa9675-d1dc-4b23-962a-5607cbacad8d" (UID: "cbaa9675-d1dc-4b23-962a-5607cbacad8d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.119896 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.119915 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lcqjf" event={"ID":"cbaa9675-d1dc-4b23-962a-5607cbacad8d","Type":"ContainerDied","Data":"e1227cf931738827ade4b73fb18e0ad558a712110f44343186d49884234bd4b5"} Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.120090 4625 scope.go:117] "RemoveContainer" containerID="486163bcae764c6e4412db6f131005b340e165fd246b56105767872ce0ec7db0" Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.122879 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e4e26075-b67d-4f81-bfdb-5fc40347fe4f" containerName="nova-metadata-log" containerID="cri-o://cf754b89d7814a0505b4d2c762d409b0a876d6b7b771058e252ba260db211ac8" gracePeriod=30 Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.123275 4625 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cbaa9675-d1dc-4b23-962a-5607cbacad8d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.123369 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e4e26075-b67d-4f81-bfdb-5fc40347fe4f" containerName="nova-metadata-metadata" containerID="cri-o://43cd9320d48f3f0a0c82841d52cfab05288d1de1eba3ba3eb5ad5788e51ead70" gracePeriod=30 Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.123467 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4e26075-b67d-4f81-bfdb-5fc40347fe4f","Type":"ContainerStarted","Data":"43cd9320d48f3f0a0c82841d52cfab05288d1de1eba3ba3eb5ad5788e51ead70"} Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.168800 4625 scope.go:117] "RemoveContainer" containerID="3ab622c63cde7a273919e28d09e24c66203621f466f62b504e734d5b7b231015" Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.189241 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.554765433 podStartE2EDuration="14.189186542s" podCreationTimestamp="2025-12-02 14:07:51 +0000 UTC" firstStartedPulling="2025-12-02 14:07:53.977709243 +0000 UTC m=+1429.939886318" lastFinishedPulling="2025-12-02 14:08:02.612130352 +0000 UTC m=+1438.574307427" observedRunningTime="2025-12-02 14:08:05.169721508 +0000 UTC m=+1441.131898583" watchObservedRunningTime="2025-12-02 14:08:05.189186542 +0000 UTC m=+1441.151363617" Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.229891 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lcqjf"] Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.312162 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lcqjf"] Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.970726 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:08:05 crc kubenswrapper[4625]: I1202 14:08:05.971153 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.159444 4625 generic.go:334] "Generic (PLEG): container finished" podID="e4e26075-b67d-4f81-bfdb-5fc40347fe4f" 
containerID="43cd9320d48f3f0a0c82841d52cfab05288d1de1eba3ba3eb5ad5788e51ead70" exitCode=0 Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.159502 4625 generic.go:334] "Generic (PLEG): container finished" podID="e4e26075-b67d-4f81-bfdb-5fc40347fe4f" containerID="cf754b89d7814a0505b4d2c762d409b0a876d6b7b771058e252ba260db211ac8" exitCode=143 Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.159559 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4e26075-b67d-4f81-bfdb-5fc40347fe4f","Type":"ContainerDied","Data":"43cd9320d48f3f0a0c82841d52cfab05288d1de1eba3ba3eb5ad5788e51ead70"} Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.159631 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4e26075-b67d-4f81-bfdb-5fc40347fe4f","Type":"ContainerDied","Data":"cf754b89d7814a0505b4d2c762d409b0a876d6b7b771058e252ba260db211ac8"} Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.410151 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.513547 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-combined-ca-bundle\") pod \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\" (UID: \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\") " Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.513901 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-config-data\") pod \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\" (UID: \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\") " Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.514115 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhdrm\" (UniqueName: \"kubernetes.io/projected/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-kube-api-access-rhdrm\") pod \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\" (UID: \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\") " Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.514181 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-logs\") pod \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\" (UID: \"e4e26075-b67d-4f81-bfdb-5fc40347fe4f\") " Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.515120 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-logs" (OuterVolumeSpecName: "logs") pod "e4e26075-b67d-4f81-bfdb-5fc40347fe4f" (UID: "e4e26075-b67d-4f81-bfdb-5fc40347fe4f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.577837 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-kube-api-access-rhdrm" (OuterVolumeSpecName: "kube-api-access-rhdrm") pod "e4e26075-b67d-4f81-bfdb-5fc40347fe4f" (UID: "e4e26075-b67d-4f81-bfdb-5fc40347fe4f"). InnerVolumeSpecName "kube-api-access-rhdrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.617295 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhdrm\" (UniqueName: \"kubernetes.io/projected/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-kube-api-access-rhdrm\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.617345 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.624578 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4e26075-b67d-4f81-bfdb-5fc40347fe4f" (UID: "e4e26075-b67d-4f81-bfdb-5fc40347fe4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.659653 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-config-data" (OuterVolumeSpecName: "config-data") pod "e4e26075-b67d-4f81-bfdb-5fc40347fe4f" (UID: "e4e26075-b67d-4f81-bfdb-5fc40347fe4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.719051 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.719098 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e26075-b67d-4f81-bfdb-5fc40347fe4f-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:06 crc kubenswrapper[4625]: I1202 14:08:06.879265 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbaa9675-d1dc-4b23-962a-5607cbacad8d" path="/var/lib/kubelet/pods/cbaa9675-d1dc-4b23-962a-5607cbacad8d/volumes" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.180341 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4e26075-b67d-4f81-bfdb-5fc40347fe4f","Type":"ContainerDied","Data":"13933fda5801551540e54328989e08ddbb389f495b18a35cb072bfb3bf99ef59"} Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.180456 4625 scope.go:117] "RemoveContainer" containerID="43cd9320d48f3f0a0c82841d52cfab05288d1de1eba3ba3eb5ad5788e51ead70" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.181577 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.214412 4625 scope.go:117] "RemoveContainer" containerID="cf754b89d7814a0505b4d2c762d409b0a876d6b7b771058e252ba260db211ac8" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.220090 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.225197 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.242486 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xf8ps" podUID="eccd8f47-db22-4700-8db7-b5e94abc38ed" containerName="registry-server" probeResult="failure" output=< Dec 02 14:08:07 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s Dec 02 14:08:07 crc kubenswrapper[4625]: > Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.256120 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.267190 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:08:07 crc kubenswrapper[4625]: E1202 14:08:07.268047 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e26075-b67d-4f81-bfdb-5fc40347fe4f" containerName="nova-metadata-metadata" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.268080 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e26075-b67d-4f81-bfdb-5fc40347fe4f" containerName="nova-metadata-metadata" Dec 02 14:08:07 crc kubenswrapper[4625]: E1202 14:08:07.268105 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.268114 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" Dec 02 14:08:07 crc kubenswrapper[4625]: E1202 14:08:07.268125 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbaa9675-d1dc-4b23-962a-5607cbacad8d" containerName="dnsmasq-dns" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.268133 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbaa9675-d1dc-4b23-962a-5607cbacad8d" containerName="dnsmasq-dns" Dec 02 14:08:07 crc kubenswrapper[4625]: E1202 14:08:07.268162 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.268169 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" Dec 02 14:08:07 crc kubenswrapper[4625]: E1202 14:08:07.268194 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e26075-b67d-4f81-bfdb-5fc40347fe4f" containerName="nova-metadata-log" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.268204 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e26075-b67d-4f81-bfdb-5fc40347fe4f" containerName="nova-metadata-log" Dec 02 14:08:07 crc kubenswrapper[4625]: E1202 14:08:07.268220 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon-log" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.268232 4625 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon-log" Dec 02 14:08:07 crc kubenswrapper[4625]: E1202 14:08:07.268245 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbaa9675-d1dc-4b23-962a-5607cbacad8d" containerName="init" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.268252 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbaa9675-d1dc-4b23-962a-5607cbacad8d" containerName="init" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.268636 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbaa9675-d1dc-4b23-962a-5607cbacad8d" containerName="dnsmasq-dns" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.268660 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.268676 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4e26075-b67d-4f81-bfdb-5fc40347fe4f" containerName="nova-metadata-log" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.268697 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon-log" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.268709 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4e26075-b67d-4f81-bfdb-5fc40347fe4f" containerName="nova-metadata-metadata" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.269140 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b6d9a8-9eed-441e-a627-83774df65ed9" containerName="horizon" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.270296 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.294882 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.295754 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.315530 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.333237 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwrd7\" (UniqueName: \"kubernetes.io/projected/ed348874-4978-434a-bc74-c58f1acc7c05-kube-api-access-zwrd7\") pod \"nova-metadata-0\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " pod="openstack/nova-metadata-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.333300 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " pod="openstack/nova-metadata-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.333410 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-config-data\") pod \"nova-metadata-0\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " pod="openstack/nova-metadata-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.333448 4625 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed348874-4978-434a-bc74-c58f1acc7c05-logs\") pod \"nova-metadata-0\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " pod="openstack/nova-metadata-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.333488 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " pod="openstack/nova-metadata-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.435171 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " pod="openstack/nova-metadata-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.435274 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwrd7\" (UniqueName: \"kubernetes.io/projected/ed348874-4978-434a-bc74-c58f1acc7c05-kube-api-access-zwrd7\") pod \"nova-metadata-0\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " pod="openstack/nova-metadata-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.435333 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " pod="openstack/nova-metadata-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.435423 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-config-data\") pod \"nova-metadata-0\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " pod="openstack/nova-metadata-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.435474 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed348874-4978-434a-bc74-c58f1acc7c05-logs\") pod \"nova-metadata-0\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " pod="openstack/nova-metadata-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.436143 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed348874-4978-434a-bc74-c58f1acc7c05-logs\") pod \"nova-metadata-0\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " pod="openstack/nova-metadata-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.443966 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-config-data\") pod \"nova-metadata-0\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " pod="openstack/nova-metadata-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.448941 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " pod="openstack/nova-metadata-0" 
Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.458264 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " pod="openstack/nova-metadata-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.459486 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwrd7\" (UniqueName: \"kubernetes.io/projected/ed348874-4978-434a-bc74-c58f1acc7c05-kube-api-access-zwrd7\") pod \"nova-metadata-0\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " pod="openstack/nova-metadata-0" Dec 02 14:08:07 crc kubenswrapper[4625]: I1202 14:08:07.594083 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:08:08 crc kubenswrapper[4625]: I1202 14:08:08.143279 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 02 14:08:08 crc kubenswrapper[4625]: I1202 14:08:08.320543 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:08:08 crc kubenswrapper[4625]: I1202 14:08:08.466506 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 02 14:08:08 crc kubenswrapper[4625]: I1202 14:08:08.888409 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4e26075-b67d-4f81-bfdb-5fc40347fe4f" path="/var/lib/kubelet/pods/e4e26075-b67d-4f81-bfdb-5fc40347fe4f/volumes" Dec 02 14:08:09 crc kubenswrapper[4625]: I1202 14:08:09.318012 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ed348874-4978-434a-bc74-c58f1acc7c05","Type":"ContainerStarted","Data":"f2ffae93c2102d8e8b295fa30f05ab98177b597f49654457f11886e9beb71e74"} Dec 02 14:08:09 crc kubenswrapper[4625]: I1202 14:08:09.318085 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ed348874-4978-434a-bc74-c58f1acc7c05","Type":"ContainerStarted","Data":"03ca74cc18c72cefe25b00917108eb5ffcfe9ed447f7cef16c8474fe12aa1405"} Dec 02 14:08:09 crc kubenswrapper[4625]: I1202 14:08:09.318097 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ed348874-4978-434a-bc74-c58f1acc7c05","Type":"ContainerStarted","Data":"c56a65fe38cb734998f7fe0bbc5e1f6a7e3f36269711853856cd69f60cc48572"} Dec 02 14:08:09 crc kubenswrapper[4625]: I1202 14:08:09.343229 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.343206802 podStartE2EDuration="2.343206802s" podCreationTimestamp="2025-12-02 14:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:08:09.340734318 +0000 UTC m=+1445.302911393" watchObservedRunningTime="2025-12-02 14:08:09.343206802 +0000 UTC m=+1445.305383887" Dec 02 14:08:12 crc kubenswrapper[4625]: I1202 14:08:12.746612 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 14:08:12 crc kubenswrapper[4625]: I1202 14:08:12.750550 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 14:08:12 crc kubenswrapper[4625]: I1202 14:08:12.750600 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-api-0" Dec 02 14:08:12 crc kubenswrapper[4625]: I1202 14:08:12.750616 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 14:08:13 crc kubenswrapper[4625]: I1202 14:08:13.466687 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 02 14:08:13 crc kubenswrapper[4625]: I1202 14:08:13.528897 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 02 14:08:13 crc kubenswrapper[4625]: I1202 14:08:13.835530 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:08:13 crc kubenswrapper[4625]: I1202 14:08:13.835883 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:08:14 crc kubenswrapper[4625]: I1202 14:08:14.424839 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 02 14:08:16 crc kubenswrapper[4625]: I1202 14:08:16.239423 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 14:08:16 crc kubenswrapper[4625]: I1202 14:08:16.240569 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="42709959-9e14-4b40-8ae8-813bf5e41d5c" containerName="kube-state-metrics" containerID="cri-o://a57f16e865835e2132feea53984f85f5298955487893b60dee476939732657b3" gracePeriod=30 Dec 02 14:08:16 crc kubenswrapper[4625]: I1202 14:08:16.404093 4625 generic.go:334] "Generic (PLEG): container finished" podID="42709959-9e14-4b40-8ae8-813bf5e41d5c" containerID="a57f16e865835e2132feea53984f85f5298955487893b60dee476939732657b3" exitCode=2 Dec 02 14:08:16 crc kubenswrapper[4625]: I1202 14:08:16.404162 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42709959-9e14-4b40-8ae8-813bf5e41d5c","Type":"ContainerDied","Data":"a57f16e865835e2132feea53984f85f5298955487893b60dee476939732657b3"} Dec 02 14:08:16 crc kubenswrapper[4625]: I1202 14:08:16.407238 4625 generic.go:334] "Generic (PLEG): container finished" podID="3cbe18c7-0a3f-4a85-91fe-79dd6af095ab" containerID="3a156d4ab635c2ca274d781e865910d9221c487f02a47308f3ac022ff72e8181" exitCode=0 Dec 02 14:08:16 crc kubenswrapper[4625]: I1202 14:08:16.407386 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p9qz6" event={"ID":"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab","Type":"ContainerDied","Data":"3a156d4ab635c2ca274d781e865910d9221c487f02a47308f3ac022ff72e8181"} Dec 02 14:08:16 crc kubenswrapper[4625]: I1202 14:08:16.410146 4625 generic.go:334] "Generic (PLEG): container finished" podID="5b257027-4621-495c-b675-99d14b598340" containerID="019ed058cf76d2fa5a7eb64b019e033a7a07c402b0949cfe4a52742aa248fd30" exitCode=0 Dec 02 14:08:16 crc kubenswrapper[4625]: I1202 14:08:16.410237 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sgm2m" 
event={"ID":"5b257027-4621-495c-b675-99d14b598340","Type":"ContainerDied","Data":"019ed058cf76d2fa5a7eb64b019e033a7a07c402b0949cfe4a52742aa248fd30"} Dec 02 14:08:16 crc kubenswrapper[4625]: I1202 14:08:16.964426 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.053763 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xf8ps" podUID="eccd8f47-db22-4700-8db7-b5e94abc38ed" containerName="registry-server" probeResult="failure" output=< Dec 02 14:08:17 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s Dec 02 14:08:17 crc kubenswrapper[4625]: > Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.154586 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddz9d\" (UniqueName: \"kubernetes.io/projected/42709959-9e14-4b40-8ae8-813bf5e41d5c-kube-api-access-ddz9d\") pod \"42709959-9e14-4b40-8ae8-813bf5e41d5c\" (UID: \"42709959-9e14-4b40-8ae8-813bf5e41d5c\") " Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.164562 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42709959-9e14-4b40-8ae8-813bf5e41d5c-kube-api-access-ddz9d" (OuterVolumeSpecName: "kube-api-access-ddz9d") pod "42709959-9e14-4b40-8ae8-813bf5e41d5c" (UID: "42709959-9e14-4b40-8ae8-813bf5e41d5c"). InnerVolumeSpecName "kube-api-access-ddz9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.258231 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddz9d\" (UniqueName: \"kubernetes.io/projected/42709959-9e14-4b40-8ae8-813bf5e41d5c-kube-api-access-ddz9d\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.423544 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.429700 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42709959-9e14-4b40-8ae8-813bf5e41d5c","Type":"ContainerDied","Data":"ff63ce3e5ea4acf27d01488072948e972296c218e74cef963c5c634ac876e250"} Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.429837 4625 scope.go:117] "RemoveContainer" containerID="a57f16e865835e2132feea53984f85f5298955487893b60dee476939732657b3" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.500972 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.517163 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.541450 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 14:08:17 crc kubenswrapper[4625]: E1202 14:08:17.542570 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42709959-9e14-4b40-8ae8-813bf5e41d5c" containerName="kube-state-metrics" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.542695 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="42709959-9e14-4b40-8ae8-813bf5e41d5c" containerName="kube-state-metrics" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.543062 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="42709959-9e14-4b40-8ae8-813bf5e41d5c" containerName="kube-state-metrics" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.544157 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.550697 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.552113 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.550954 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.594476 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.594915 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.697665 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ec415c-0f48-4b5b-98f6-6f854c2910ee-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"34ec415c-0f48-4b5b-98f6-6f854c2910ee\") " pod="openstack/kube-state-metrics-0" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.698159 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/34ec415c-0f48-4b5b-98f6-6f854c2910ee-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"34ec415c-0f48-4b5b-98f6-6f854c2910ee\") " pod="openstack/kube-state-metrics-0" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.698223 4625 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/34ec415c-0f48-4b5b-98f6-6f854c2910ee-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"34ec415c-0f48-4b5b-98f6-6f854c2910ee\") " pod="openstack/kube-state-metrics-0" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.698289 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pdnz\" (UniqueName: \"kubernetes.io/projected/34ec415c-0f48-4b5b-98f6-6f854c2910ee-kube-api-access-7pdnz\") pod \"kube-state-metrics-0\" (UID: \"34ec415c-0f48-4b5b-98f6-6f854c2910ee\") " pod="openstack/kube-state-metrics-0" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.799444 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ec415c-0f48-4b5b-98f6-6f854c2910ee-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"34ec415c-0f48-4b5b-98f6-6f854c2910ee\") " pod="openstack/kube-state-metrics-0" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.799545 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/34ec415c-0f48-4b5b-98f6-6f854c2910ee-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"34ec415c-0f48-4b5b-98f6-6f854c2910ee\") " pod="openstack/kube-state-metrics-0" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.799630 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/34ec415c-0f48-4b5b-98f6-6f854c2910ee-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"34ec415c-0f48-4b5b-98f6-6f854c2910ee\") " pod="openstack/kube-state-metrics-0" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.799691 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pdnz\" (UniqueName: \"kubernetes.io/projected/34ec415c-0f48-4b5b-98f6-6f854c2910ee-kube-api-access-7pdnz\") pod \"kube-state-metrics-0\" (UID: \"34ec415c-0f48-4b5b-98f6-6f854c2910ee\") " pod="openstack/kube-state-metrics-0" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.830373 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/34ec415c-0f48-4b5b-98f6-6f854c2910ee-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"34ec415c-0f48-4b5b-98f6-6f854c2910ee\") " pod="openstack/kube-state-metrics-0" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.875440 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pdnz\" (UniqueName: \"kubernetes.io/projected/34ec415c-0f48-4b5b-98f6-6f854c2910ee-kube-api-access-7pdnz\") pod \"kube-state-metrics-0\" (UID: \"34ec415c-0f48-4b5b-98f6-6f854c2910ee\") " pod="openstack/kube-state-metrics-0" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.895630 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ec415c-0f48-4b5b-98f6-6f854c2910ee-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"34ec415c-0f48-4b5b-98f6-6f854c2910ee\") " pod="openstack/kube-state-metrics-0" Dec 02 14:08:17 crc kubenswrapper[4625]: I1202 14:08:17.903277 4625 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/34ec415c-0f48-4b5b-98f6-6f854c2910ee-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"34ec415c-0f48-4b5b-98f6-6f854c2910ee\") " pod="openstack/kube-state-metrics-0" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.208081 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.283822 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sgm2m" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.309002 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddt6k\" (UniqueName: \"kubernetes.io/projected/5b257027-4621-495c-b675-99d14b598340-kube-api-access-ddt6k\") pod \"5b257027-4621-495c-b675-99d14b598340\" (UID: \"5b257027-4621-495c-b675-99d14b598340\") " Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.309103 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-scripts\") pod \"5b257027-4621-495c-b675-99d14b598340\" (UID: \"5b257027-4621-495c-b675-99d14b598340\") " Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.309155 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-config-data\") pod \"5b257027-4621-495c-b675-99d14b598340\" (UID: \"5b257027-4621-495c-b675-99d14b598340\") " Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.309199 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-combined-ca-bundle\") pod \"5b257027-4621-495c-b675-99d14b598340\" (UID: \"5b257027-4621-495c-b675-99d14b598340\") " Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.319634 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-scripts" (OuterVolumeSpecName: "scripts") pod "5b257027-4621-495c-b675-99d14b598340" (UID: "5b257027-4621-495c-b675-99d14b598340"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.328751 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b257027-4621-495c-b675-99d14b598340-kube-api-access-ddt6k" (OuterVolumeSpecName: "kube-api-access-ddt6k") pod "5b257027-4621-495c-b675-99d14b598340" (UID: "5b257027-4621-495c-b675-99d14b598340"). InnerVolumeSpecName "kube-api-access-ddt6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.346085 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p9qz6" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.363981 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-config-data" (OuterVolumeSpecName: "config-data") pod "5b257027-4621-495c-b675-99d14b598340" (UID: "5b257027-4621-495c-b675-99d14b598340"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.413697 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7ts7\" (UniqueName: \"kubernetes.io/projected/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-kube-api-access-k7ts7\") pod \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\" (UID: \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\") " Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.413973 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-scripts\") pod \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\" (UID: \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\") " Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.414036 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-config-data\") pod \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\" (UID: \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\") " Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.414142 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-combined-ca-bundle\") pod \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\" (UID: \"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab\") " Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.414722 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.414741 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.414750 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddt6k\" (UniqueName: \"kubernetes.io/projected/5b257027-4621-495c-b675-99d14b598340-kube-api-access-ddt6k\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.435539 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b257027-4621-495c-b675-99d14b598340" (UID: "5b257027-4621-495c-b675-99d14b598340"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.446916 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-kube-api-access-k7ts7" (OuterVolumeSpecName: "kube-api-access-k7ts7") pod "3cbe18c7-0a3f-4a85-91fe-79dd6af095ab" (UID: "3cbe18c7-0a3f-4a85-91fe-79dd6af095ab"). InnerVolumeSpecName "kube-api-access-k7ts7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.460540 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-scripts" (OuterVolumeSpecName: "scripts") pod "3cbe18c7-0a3f-4a85-91fe-79dd6af095ab" (UID: "3cbe18c7-0a3f-4a85-91fe-79dd6af095ab"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.515887 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.515919 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b257027-4621-495c-b675-99d14b598340-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.515932 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7ts7\" (UniqueName: \"kubernetes.io/projected/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-kube-api-access-k7ts7\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.531813 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cbe18c7-0a3f-4a85-91fe-79dd6af095ab" (UID: "3cbe18c7-0a3f-4a85-91fe-79dd6af095ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.544575 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-config-data" (OuterVolumeSpecName: "config-data") pod "3cbe18c7-0a3f-4a85-91fe-79dd6af095ab" (UID: "3cbe18c7-0a3f-4a85-91fe-79dd6af095ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.554859 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sgm2m" event={"ID":"5b257027-4621-495c-b675-99d14b598340","Type":"ContainerDied","Data":"1d3c54e786c1cbb9e8c29ce28d8e657c624790b7b8deecd87be403789b096f32"} Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.554916 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d3c54e786c1cbb9e8c29ce28d8e657c624790b7b8deecd87be403789b096f32" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.555022 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sgm2m" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.604535 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p9qz6" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.605184 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p9qz6" event={"ID":"3cbe18c7-0a3f-4a85-91fe-79dd6af095ab","Type":"ContainerDied","Data":"39250b2503defc51b13bd509ccf8e27069ce19cbed405cd56daef5d9c68b22fc"} Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.605275 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39250b2503defc51b13bd509ccf8e27069ce19cbed405cd56daef5d9c68b22fc" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.635755 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.636095 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.650492 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ed348874-4978-434a-bc74-c58f1acc7c05" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.651050 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ed348874-4978-434a-bc74-c58f1acc7c05" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.783602 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="2e108301-d560-49b4-a4b2-a2f45c2fa8fd" containerName="galera" probeResult="failure" output="command timed out" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.784015 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 14:08:18 crc kubenswrapper[4625]: E1202 14:08:18.785151 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b257027-4621-495c-b675-99d14b598340" containerName="nova-cell1-conductor-db-sync" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.785179 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b257027-4621-495c-b675-99d14b598340" containerName="nova-cell1-conductor-db-sync" Dec 02 14:08:18 crc kubenswrapper[4625]: E1202 14:08:18.785204 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cbe18c7-0a3f-4a85-91fe-79dd6af095ab" containerName="nova-manage" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.785213 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cbe18c7-0a3f-4a85-91fe-79dd6af095ab" containerName="nova-manage" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.785668 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cbe18c7-0a3f-4a85-91fe-79dd6af095ab" containerName="nova-manage" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.785698 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b257027-4621-495c-b675-99d14b598340" containerName="nova-cell1-conductor-db-sync" Dec 02 14:08:18 
crc kubenswrapper[4625]: I1202 14:08:18.793940 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.844884 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02f586b-acea-434e-9258-d9cd407b3595-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c02f586b-acea-434e-9258-d9cd407b3595\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.844941 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqq6k\" (UniqueName: \"kubernetes.io/projected/c02f586b-acea-434e-9258-d9cd407b3595-kube-api-access-sqq6k\") pod \"nova-cell1-conductor-0\" (UID: \"c02f586b-acea-434e-9258-d9cd407b3595\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.845073 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02f586b-acea-434e-9258-d9cd407b3595-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c02f586b-acea-434e-9258-d9cd407b3595\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.851104 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.903840 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.952197 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02f586b-acea-434e-9258-d9cd407b3595-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c02f586b-acea-434e-9258-d9cd407b3595\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.960547 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02f586b-acea-434e-9258-d9cd407b3595-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c02f586b-acea-434e-9258-d9cd407b3595\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.960637 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqq6k\" (UniqueName: \"kubernetes.io/projected/c02f586b-acea-434e-9258-d9cd407b3595-kube-api-access-sqq6k\") pod \"nova-cell1-conductor-0\" (UID: \"c02f586b-acea-434e-9258-d9cd407b3595\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.955301 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42709959-9e14-4b40-8ae8-813bf5e41d5c" path="/var/lib/kubelet/pods/42709959-9e14-4b40-8ae8-813bf5e41d5c/volumes" Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.962482 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.962711 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" containerName="nova-api-log" 
containerID="cri-o://0d684b81dc0f0a038ca71757739ee40b2c2285e2394e6d07d85926f7fc449b9c" gracePeriod=30 Dec 02 14:08:18 crc kubenswrapper[4625]: I1202 14:08:18.964856 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" containerName="nova-api-api" containerID="cri-o://20204708912e834d5e442d2e2dd2a356c2c42074bce8fd7453175185dc99ca8a" gracePeriod=30 Dec 02 14:08:19 crc kubenswrapper[4625]: I1202 14:08:19.017284 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02f586b-acea-434e-9258-d9cd407b3595-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c02f586b-acea-434e-9258-d9cd407b3595\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:08:19 crc kubenswrapper[4625]: I1202 14:08:19.018395 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqq6k\" (UniqueName: \"kubernetes.io/projected/c02f586b-acea-434e-9258-d9cd407b3595-kube-api-access-sqq6k\") pod \"nova-cell1-conductor-0\" (UID: \"c02f586b-acea-434e-9258-d9cd407b3595\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:08:19 crc kubenswrapper[4625]: I1202 14:08:19.030770 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:08:19 crc kubenswrapper[4625]: I1202 14:08:19.031083 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7af36017-bcb5-46cb-a07d-d65dd6152f6f" containerName="nova-scheduler-scheduler" containerID="cri-o://cdb762fb5d41ad81cfb0a1769b50fe224d7248e7f75f54edc6d023f6820a4e1e" gracePeriod=30 Dec 02 14:08:19 crc kubenswrapper[4625]: I1202 14:08:19.065636 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02f586b-acea-434e-9258-d9cd407b3595-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c02f586b-acea-434e-9258-d9cd407b3595\") " pod="openstack/nova-cell1-conductor-0" Dec 02 14:08:19 crc kubenswrapper[4625]: I1202 14:08:19.138174 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:08:19 crc kubenswrapper[4625]: I1202 14:08:19.157559 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 02 14:08:19 crc kubenswrapper[4625]: I1202 14:08:19.279097 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:08:19 crc kubenswrapper[4625]: I1202 14:08:19.279170 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:08:19 crc kubenswrapper[4625]: I1202 14:08:19.522432 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 02 14:08:19 crc kubenswrapper[4625]: I1202 14:08:19.637295 4625 generic.go:334] "Generic (PLEG): container finished" podID="6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" containerID="0d684b81dc0f0a038ca71757739ee40b2c2285e2394e6d07d85926f7fc449b9c" exitCode=143 Dec 02 14:08:19 crc kubenswrapper[4625]: I1202 14:08:19.637512 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3","Type":"ContainerDied","Data":"0d684b81dc0f0a038ca71757739ee40b2c2285e2394e6d07d85926f7fc449b9c"} Dec 02 14:08:19 crc kubenswrapper[4625]: I1202 14:08:19.647788 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"34ec415c-0f48-4b5b-98f6-6f854c2910ee","Type":"ContainerStarted","Data":"95cd405060f2e325825162bf5a8d16c1560efe469781a9e30b3c7ba83b13b6bf"} Dec 02 14:08:19 crc kubenswrapper[4625]: I1202 14:08:19.648019 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ed348874-4978-434a-bc74-c58f1acc7c05" containerName="nova-metadata-log" containerID="cri-o://03ca74cc18c72cefe25b00917108eb5ffcfe9ed447f7cef16c8474fe12aa1405" gracePeriod=30 Dec 02 14:08:19 crc kubenswrapper[4625]: I1202 14:08:19.648660 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ed348874-4978-434a-bc74-c58f1acc7c05" containerName="nova-metadata-metadata" containerID="cri-o://f2ffae93c2102d8e8b295fa30f05ab98177b597f49654457f11886e9beb71e74" gracePeriod=30 Dec 02 14:08:20 crc kubenswrapper[4625]: I1202 14:08:20.062708 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 02 14:08:20 crc kubenswrapper[4625]: I1202 14:08:20.664423 4625 generic.go:334] "Generic (PLEG): container finished" podID="ed348874-4978-434a-bc74-c58f1acc7c05" containerID="03ca74cc18c72cefe25b00917108eb5ffcfe9ed447f7cef16c8474fe12aa1405" exitCode=143 Dec 02 14:08:20 crc kubenswrapper[4625]: I1202 14:08:20.664650 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ed348874-4978-434a-bc74-c58f1acc7c05","Type":"ContainerDied","Data":"03ca74cc18c72cefe25b00917108eb5ffcfe9ed447f7cef16c8474fe12aa1405"} Dec 02 14:08:20 crc kubenswrapper[4625]: I1202 14:08:20.669440 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"c02f586b-acea-434e-9258-d9cd407b3595","Type":"ContainerStarted","Data":"485dcf6f343a9c73a76484e9264381ba69694c189cdfe262f0b46b0582bb3045"} Dec 02 14:08:20 crc kubenswrapper[4625]: I1202 14:08:20.669497 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c02f586b-acea-434e-9258-d9cd407b3595","Type":"ContainerStarted","Data":"9011309003f82bf76af06180f48d737d2088cb5a9de8b0475bb82aa3152bea95"} Dec 02 14:08:20 crc kubenswrapper[4625]: I1202 14:08:20.671112 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 02 14:08:20 crc kubenswrapper[4625]: I1202 14:08:20.679630 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"34ec415c-0f48-4b5b-98f6-6f854c2910ee","Type":"ContainerStarted","Data":"e933d6a4c62d63d60375aa9f94ef3f6709b1e0aed7445d3f8b7ea9c2a064ba98"} Dec 02 14:08:20 crc kubenswrapper[4625]: I1202 14:08:20.680799 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 02 14:08:20 crc kubenswrapper[4625]: I1202 14:08:20.735351 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.735294481 podStartE2EDuration="2.735294481s" podCreationTimestamp="2025-12-02 14:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:08:20.703031605 +0000 UTC m=+1456.665208680" watchObservedRunningTime="2025-12-02 14:08:20.735294481 +0000 UTC m=+1456.697471556" Dec 02 14:08:21 crc kubenswrapper[4625]: I1202 14:08:21.410934 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.020079482 podStartE2EDuration="4.410907361s" podCreationTimestamp="2025-12-02 14:08:17 +0000 UTC" firstStartedPulling="2025-12-02 14:08:19.581669343 +0000 UTC m=+1455.543846418" lastFinishedPulling="2025-12-02 14:08:19.972497222 +0000 UTC m=+1455.934674297" observedRunningTime="2025-12-02 14:08:20.742645062 +0000 UTC m=+1456.704822137" watchObservedRunningTime="2025-12-02 14:08:21.410907361 +0000 UTC m=+1457.373084436" Dec 02 14:08:21 crc kubenswrapper[4625]: I1202 14:08:21.416704 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:08:21 crc kubenswrapper[4625]: I1202 14:08:21.417091 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerName="ceilometer-central-agent" containerID="cri-o://bb4a0f59934bb97e26204529927b1dd947926ee0e5e6ca193eef95d918d0bd59" gracePeriod=30 Dec 02 14:08:21 crc kubenswrapper[4625]: I1202 14:08:21.417168 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerName="proxy-httpd" containerID="cri-o://2afe16dd9b6a3b54ae8d7542ca0a7d0040550d1bd6c481bfb6f796e10fb903a7" gracePeriod=30 Dec 02 14:08:21 crc kubenswrapper[4625]: I1202 14:08:21.417218 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerName="ceilometer-notification-agent" containerID="cri-o://b6e6d8376835b9c77ac602b2620afdc252cef4e8ab26b49b859c540cf6469d73" gracePeriod=30 Dec 02 14:08:21 crc kubenswrapper[4625]: I1202 
14:08:21.417274 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerName="sg-core" containerID="cri-o://ac9ba79467fe4218a1291fad8aeff3ed837383f370164bb4645a4e5eebed2bde" gracePeriod=30 Dec 02 14:08:21 crc kubenswrapper[4625]: I1202 14:08:21.695558 4625 generic.go:334] "Generic (PLEG): container finished" podID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerID="ac9ba79467fe4218a1291fad8aeff3ed837383f370164bb4645a4e5eebed2bde" exitCode=2 Dec 02 14:08:21 crc kubenswrapper[4625]: I1202 14:08:21.695642 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a55923-8fce-4593-ab7c-5b399ceeddcf","Type":"ContainerDied","Data":"ac9ba79467fe4218a1291fad8aeff3ed837383f370164bb4645a4e5eebed2bde"} Dec 02 14:08:22 crc kubenswrapper[4625]: I1202 14:08:22.585495 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 14:08:22 crc kubenswrapper[4625]: I1202 14:08:22.585567 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 14:08:22 crc kubenswrapper[4625]: I1202 14:08:22.729270 4625 generic.go:334] "Generic (PLEG): container finished" podID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerID="2afe16dd9b6a3b54ae8d7542ca0a7d0040550d1bd6c481bfb6f796e10fb903a7" exitCode=0 Dec 02 14:08:22 crc kubenswrapper[4625]: I1202 14:08:22.729363 4625 generic.go:334] "Generic (PLEG): container finished" podID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerID="bb4a0f59934bb97e26204529927b1dd947926ee0e5e6ca193eef95d918d0bd59" exitCode=0 Dec 02 14:08:22 crc kubenswrapper[4625]: I1202 14:08:22.729709 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a55923-8fce-4593-ab7c-5b399ceeddcf","Type":"ContainerDied","Data":"2afe16dd9b6a3b54ae8d7542ca0a7d0040550d1bd6c481bfb6f796e10fb903a7"} Dec 02 14:08:22 crc kubenswrapper[4625]: I1202 14:08:22.729759 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a55923-8fce-4593-ab7c-5b399ceeddcf","Type":"ContainerDied","Data":"bb4a0f59934bb97e26204529927b1dd947926ee0e5e6ca193eef95d918d0bd59"} Dec 02 14:08:23 crc kubenswrapper[4625]: E1202 14:08:23.470119 4625 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdb762fb5d41ad81cfb0a1769b50fe224d7248e7f75f54edc6d023f6820a4e1e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 14:08:23 crc kubenswrapper[4625]: E1202 14:08:23.472968 4625 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdb762fb5d41ad81cfb0a1769b50fe224d7248e7f75f54edc6d023f6820a4e1e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 14:08:23 crc kubenswrapper[4625]: E1202 14:08:23.476186 4625 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdb762fb5d41ad81cfb0a1769b50fe224d7248e7f75f54edc6d023f6820a4e1e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 14:08:23 crc kubenswrapper[4625]: E1202 14:08:23.476266 4625 prober.go:104] "Probe errored" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7af36017-bcb5-46cb-a07d-d65dd6152f6f" containerName="nova-scheduler-scheduler" Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.521341 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.632893 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-logs\") pod \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\" (UID: \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\") " Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.633691 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bggt\" (UniqueName: \"kubernetes.io/projected/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-kube-api-access-6bggt\") pod \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\" (UID: \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\") " Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.633843 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-logs" (OuterVolumeSpecName: "logs") pod "6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" (UID: "6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.633991 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-config-data\") pod \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\" (UID: \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\") " Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.634173 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-combined-ca-bundle\") pod \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\" (UID: \"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3\") " Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.636789 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.643763 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-kube-api-access-6bggt" (OuterVolumeSpecName: "kube-api-access-6bggt") pod "6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" (UID: "6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3"). InnerVolumeSpecName "kube-api-access-6bggt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.686092 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" (UID: "6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.701475 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-config-data" (OuterVolumeSpecName: "config-data") pod "6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" (UID: "6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.758344 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.758394 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bggt\" (UniqueName: \"kubernetes.io/projected/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-kube-api-access-6bggt\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.758419 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.792503 4625 generic.go:334] "Generic (PLEG): container finished" podID="6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" containerID="20204708912e834d5e442d2e2dd2a356c2c42074bce8fd7453175185dc99ca8a" exitCode=0 Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.792701 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3","Type":"ContainerDied","Data":"20204708912e834d5e442d2e2dd2a356c2c42074bce8fd7453175185dc99ca8a"} Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.792872 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3","Type":"ContainerDied","Data":"4c90cf55136bb8a83ee22c044515738fc00122279b075dbaba6d0652d048cf20"} Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.792961 4625 scope.go:117] "RemoveContainer" containerID="20204708912e834d5e442d2e2dd2a356c2c42074bce8fd7453175185dc99ca8a" Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.793191 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:08:24 crc kubenswrapper[4625]: I1202 14:08:24.941624 4625 scope.go:117] "RemoveContainer" containerID="0d684b81dc0f0a038ca71757739ee40b2c2285e2394e6d07d85926f7fc449b9c" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.040502 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.047297 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.092401 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 14:08:25 crc kubenswrapper[4625]: E1202 14:08:25.093029 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" containerName="nova-api-api" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.093051 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" containerName="nova-api-api" Dec 02 14:08:25 crc kubenswrapper[4625]: E1202 14:08:25.093074 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" containerName="nova-api-log" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.093082 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" containerName="nova-api-log" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.093327 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" containerName="nova-api-log" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.093344 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" containerName="nova-api-api" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.094640 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.097727 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.106359 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.117617 4625 scope.go:117] "RemoveContainer" containerID="20204708912e834d5e442d2e2dd2a356c2c42074bce8fd7453175185dc99ca8a" Dec 02 14:08:25 crc kubenswrapper[4625]: E1202 14:08:25.129643 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20204708912e834d5e442d2e2dd2a356c2c42074bce8fd7453175185dc99ca8a\": container with ID starting with 20204708912e834d5e442d2e2dd2a356c2c42074bce8fd7453175185dc99ca8a not found: ID does not exist" containerID="20204708912e834d5e442d2e2dd2a356c2c42074bce8fd7453175185dc99ca8a" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.130166 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20204708912e834d5e442d2e2dd2a356c2c42074bce8fd7453175185dc99ca8a"} err="failed to get container status \"20204708912e834d5e442d2e2dd2a356c2c42074bce8fd7453175185dc99ca8a\": rpc error: code = NotFound desc = could not find container \"20204708912e834d5e442d2e2dd2a356c2c42074bce8fd7453175185dc99ca8a\": container with ID starting with 20204708912e834d5e442d2e2dd2a356c2c42074bce8fd7453175185dc99ca8a not found: ID does not exist" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.130282 4625 scope.go:117] "RemoveContainer" containerID="0d684b81dc0f0a038ca71757739ee40b2c2285e2394e6d07d85926f7fc449b9c" Dec 02 14:08:25 crc kubenswrapper[4625]: E1202 14:08:25.139102 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d684b81dc0f0a038ca71757739ee40b2c2285e2394e6d07d85926f7fc449b9c\": container with ID starting with 0d684b81dc0f0a038ca71757739ee40b2c2285e2394e6d07d85926f7fc449b9c not found: ID does not exist" containerID="0d684b81dc0f0a038ca71757739ee40b2c2285e2394e6d07d85926f7fc449b9c" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.139156 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d684b81dc0f0a038ca71757739ee40b2c2285e2394e6d07d85926f7fc449b9c"} err="failed to get container status \"0d684b81dc0f0a038ca71757739ee40b2c2285e2394e6d07d85926f7fc449b9c\": rpc error: code = NotFound desc = could not find container \"0d684b81dc0f0a038ca71757739ee40b2c2285e2394e6d07d85926f7fc449b9c\": container with ID starting with 0d684b81dc0f0a038ca71757739ee40b2c2285e2394e6d07d85926f7fc449b9c not found: ID does not exist" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.274572 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jbcp\" (UniqueName: \"kubernetes.io/projected/5e6e2c85-21fe-464c-9954-940e1c3b138b-kube-api-access-5jbcp\") pod \"nova-api-0\" (UID: \"5e6e2c85-21fe-464c-9954-940e1c3b138b\") " pod="openstack/nova-api-0" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.274703 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6e2c85-21fe-464c-9954-940e1c3b138b-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"5e6e2c85-21fe-464c-9954-940e1c3b138b\") " pod="openstack/nova-api-0" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.274823 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6e2c85-21fe-464c-9954-940e1c3b138b-logs\") pod \"nova-api-0\" (UID: \"5e6e2c85-21fe-464c-9954-940e1c3b138b\") " pod="openstack/nova-api-0" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.274855 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6e2c85-21fe-464c-9954-940e1c3b138b-config-data\") pod \"nova-api-0\" (UID: \"5e6e2c85-21fe-464c-9954-940e1c3b138b\") " pod="openstack/nova-api-0" Dec 02 14:08:25 crc kubenswrapper[4625]: E1202 14:08:25.306753 4625 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65a55923_8fce_4593_ab7c_5b399ceeddcf.slice/crio-conmon-b6e6d8376835b9c77ac602b2620afdc252cef4e8ab26b49b859c540cf6469d73.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fb94393_eb1c_4d1a_ae73_963cf1a8e1f3.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65a55923_8fce_4593_ab7c_5b399ceeddcf.slice/crio-b6e6d8376835b9c77ac602b2620afdc252cef4e8ab26b49b859c540cf6469d73.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fb94393_eb1c_4d1a_ae73_963cf1a8e1f3.slice/crio-4c90cf55136bb8a83ee22c044515738fc00122279b075dbaba6d0652d048cf20\": RecentStats: unable to find data in memory cache]" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.377443 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6e2c85-21fe-464c-9954-940e1c3b138b-logs\") pod \"nova-api-0\" (UID: \"5e6e2c85-21fe-464c-9954-940e1c3b138b\") " pod="openstack/nova-api-0" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.377497 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6e2c85-21fe-464c-9954-940e1c3b138b-config-data\") pod \"nova-api-0\" (UID: \"5e6e2c85-21fe-464c-9954-940e1c3b138b\") " pod="openstack/nova-api-0" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.377580 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jbcp\" (UniqueName: \"kubernetes.io/projected/5e6e2c85-21fe-464c-9954-940e1c3b138b-kube-api-access-5jbcp\") pod \"nova-api-0\" (UID: \"5e6e2c85-21fe-464c-9954-940e1c3b138b\") " pod="openstack/nova-api-0" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.377678 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6e2c85-21fe-464c-9954-940e1c3b138b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5e6e2c85-21fe-464c-9954-940e1c3b138b\") " pod="openstack/nova-api-0" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.379481 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6e2c85-21fe-464c-9954-940e1c3b138b-logs\") pod \"nova-api-0\" (UID: \"5e6e2c85-21fe-464c-9954-940e1c3b138b\") " 
pod="openstack/nova-api-0" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.391890 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6e2c85-21fe-464c-9954-940e1c3b138b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5e6e2c85-21fe-464c-9954-940e1c3b138b\") " pod="openstack/nova-api-0" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.406877 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jbcp\" (UniqueName: \"kubernetes.io/projected/5e6e2c85-21fe-464c-9954-940e1c3b138b-kube-api-access-5jbcp\") pod \"nova-api-0\" (UID: \"5e6e2c85-21fe-464c-9954-940e1c3b138b\") " pod="openstack/nova-api-0" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.417633 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6e2c85-21fe-464c-9954-940e1c3b138b-config-data\") pod \"nova-api-0\" (UID: \"5e6e2c85-21fe-464c-9954-940e1c3b138b\") " pod="openstack/nova-api-0" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.461102 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.542159 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.691752 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-scripts\") pod \"65a55923-8fce-4593-ab7c-5b399ceeddcf\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.692293 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-sg-core-conf-yaml\") pod \"65a55923-8fce-4593-ab7c-5b399ceeddcf\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.692404 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a55923-8fce-4593-ab7c-5b399ceeddcf-log-httpd\") pod \"65a55923-8fce-4593-ab7c-5b399ceeddcf\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.692746 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-combined-ca-bundle\") pod \"65a55923-8fce-4593-ab7c-5b399ceeddcf\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.692786 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a55923-8fce-4593-ab7c-5b399ceeddcf-run-httpd\") pod \"65a55923-8fce-4593-ab7c-5b399ceeddcf\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.692842 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8cw8\" (UniqueName: \"kubernetes.io/projected/65a55923-8fce-4593-ab7c-5b399ceeddcf-kube-api-access-n8cw8\") pod \"65a55923-8fce-4593-ab7c-5b399ceeddcf\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " Dec 02 14:08:25 crc 
kubenswrapper[4625]: I1202 14:08:25.692875 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-config-data\") pod \"65a55923-8fce-4593-ab7c-5b399ceeddcf\" (UID: \"65a55923-8fce-4593-ab7c-5b399ceeddcf\") " Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.697405 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-scripts" (OuterVolumeSpecName: "scripts") pod "65a55923-8fce-4593-ab7c-5b399ceeddcf" (UID: "65a55923-8fce-4593-ab7c-5b399ceeddcf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.697738 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a55923-8fce-4593-ab7c-5b399ceeddcf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "65a55923-8fce-4593-ab7c-5b399ceeddcf" (UID: "65a55923-8fce-4593-ab7c-5b399ceeddcf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.699639 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a55923-8fce-4593-ab7c-5b399ceeddcf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "65a55923-8fce-4593-ab7c-5b399ceeddcf" (UID: "65a55923-8fce-4593-ab7c-5b399ceeddcf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.702024 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a55923-8fce-4593-ab7c-5b399ceeddcf-kube-api-access-n8cw8" (OuterVolumeSpecName: "kube-api-access-n8cw8") pod "65a55923-8fce-4593-ab7c-5b399ceeddcf" (UID: "65a55923-8fce-4593-ab7c-5b399ceeddcf"). InnerVolumeSpecName "kube-api-access-n8cw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.772782 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "65a55923-8fce-4593-ab7c-5b399ceeddcf" (UID: "65a55923-8fce-4593-ab7c-5b399ceeddcf"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.798121 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.798163 4625 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.798200 4625 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a55923-8fce-4593-ab7c-5b399ceeddcf-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.798212 4625 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65a55923-8fce-4593-ab7c-5b399ceeddcf-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.798225 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8cw8\" (UniqueName: \"kubernetes.io/projected/65a55923-8fce-4593-ab7c-5b399ceeddcf-kube-api-access-n8cw8\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.815127 4625 generic.go:334] "Generic (PLEG): container finished" podID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerID="b6e6d8376835b9c77ac602b2620afdc252cef4e8ab26b49b859c540cf6469d73" exitCode=0 Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.815400 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.816631 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a55923-8fce-4593-ab7c-5b399ceeddcf","Type":"ContainerDied","Data":"b6e6d8376835b9c77ac602b2620afdc252cef4e8ab26b49b859c540cf6469d73"} Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.816679 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65a55923-8fce-4593-ab7c-5b399ceeddcf","Type":"ContainerDied","Data":"f9e6e215624d80ee110c8b074bf3d1a0c0d008207305b001de780417e9201f0e"} Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.816705 4625 scope.go:117] "RemoveContainer" containerID="2afe16dd9b6a3b54ae8d7542ca0a7d0040550d1bd6c481bfb6f796e10fb903a7" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.837752 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65a55923-8fce-4593-ab7c-5b399ceeddcf" (UID: "65a55923-8fce-4593-ab7c-5b399ceeddcf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.838657 4625 generic.go:334] "Generic (PLEG): container finished" podID="7af36017-bcb5-46cb-a07d-d65dd6152f6f" containerID="cdb762fb5d41ad81cfb0a1769b50fe224d7248e7f75f54edc6d023f6820a4e1e" exitCode=0 Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.838803 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7af36017-bcb5-46cb-a07d-d65dd6152f6f","Type":"ContainerDied","Data":"cdb762fb5d41ad81cfb0a1769b50fe224d7248e7f75f54edc6d023f6820a4e1e"} Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.855815 4625 scope.go:117] "RemoveContainer" containerID="ac9ba79467fe4218a1291fad8aeff3ed837383f370164bb4645a4e5eebed2bde" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.910085 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-config-data" (OuterVolumeSpecName: "config-data") pod "65a55923-8fce-4593-ab7c-5b399ceeddcf" (UID: "65a55923-8fce-4593-ab7c-5b399ceeddcf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.932590 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.932631 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65a55923-8fce-4593-ab7c-5b399ceeddcf-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.943515 4625 scope.go:117] "RemoveContainer" containerID="b6e6d8376835b9c77ac602b2620afdc252cef4e8ab26b49b859c540cf6469d73" Dec 02 14:08:25 crc kubenswrapper[4625]: I1202 14:08:25.981743 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.035769 4625 scope.go:117] "RemoveContainer" containerID="bb4a0f59934bb97e26204529927b1dd947926ee0e5e6ca193eef95d918d0bd59" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.100903 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.114892 4625 scope.go:117] "RemoveContainer" containerID="2afe16dd9b6a3b54ae8d7542ca0a7d0040550d1bd6c481bfb6f796e10fb903a7" Dec 02 14:08:26 crc kubenswrapper[4625]: E1202 14:08:26.115539 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2afe16dd9b6a3b54ae8d7542ca0a7d0040550d1bd6c481bfb6f796e10fb903a7\": container with ID starting with 2afe16dd9b6a3b54ae8d7542ca0a7d0040550d1bd6c481bfb6f796e10fb903a7 not found: ID does not exist" containerID="2afe16dd9b6a3b54ae8d7542ca0a7d0040550d1bd6c481bfb6f796e10fb903a7" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.115570 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2afe16dd9b6a3b54ae8d7542ca0a7d0040550d1bd6c481bfb6f796e10fb903a7"} err="failed to get container status \"2afe16dd9b6a3b54ae8d7542ca0a7d0040550d1bd6c481bfb6f796e10fb903a7\": rpc error: code = NotFound desc = could not find container \"2afe16dd9b6a3b54ae8d7542ca0a7d0040550d1bd6c481bfb6f796e10fb903a7\": container with ID starting with 2afe16dd9b6a3b54ae8d7542ca0a7d0040550d1bd6c481bfb6f796e10fb903a7 not found: ID does not exist" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.115596 4625 scope.go:117] "RemoveContainer" containerID="ac9ba79467fe4218a1291fad8aeff3ed837383f370164bb4645a4e5eebed2bde" Dec 02 14:08:26 crc kubenswrapper[4625]: E1202 14:08:26.115949 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac9ba79467fe4218a1291fad8aeff3ed837383f370164bb4645a4e5eebed2bde\": container with ID starting with ac9ba79467fe4218a1291fad8aeff3ed837383f370164bb4645a4e5eebed2bde not found: ID does not exist" containerID="ac9ba79467fe4218a1291fad8aeff3ed837383f370164bb4645a4e5eebed2bde" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.116012 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9ba79467fe4218a1291fad8aeff3ed837383f370164bb4645a4e5eebed2bde"} err="failed to get container status \"ac9ba79467fe4218a1291fad8aeff3ed837383f370164bb4645a4e5eebed2bde\": rpc error: code = NotFound desc = could not find container \"ac9ba79467fe4218a1291fad8aeff3ed837383f370164bb4645a4e5eebed2bde\": container with ID starting with ac9ba79467fe4218a1291fad8aeff3ed837383f370164bb4645a4e5eebed2bde not found: ID does not exist" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.116030 4625 scope.go:117] "RemoveContainer" containerID="b6e6d8376835b9c77ac602b2620afdc252cef4e8ab26b49b859c540cf6469d73" Dec 02 14:08:26 crc kubenswrapper[4625]: E1202 14:08:26.116597 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6e6d8376835b9c77ac602b2620afdc252cef4e8ab26b49b859c540cf6469d73\": container with ID starting with b6e6d8376835b9c77ac602b2620afdc252cef4e8ab26b49b859c540cf6469d73 not found: ID does not exist" containerID="b6e6d8376835b9c77ac602b2620afdc252cef4e8ab26b49b859c540cf6469d73" Dec 02 
14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.116654 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6e6d8376835b9c77ac602b2620afdc252cef4e8ab26b49b859c540cf6469d73"} err="failed to get container status \"b6e6d8376835b9c77ac602b2620afdc252cef4e8ab26b49b859c540cf6469d73\": rpc error: code = NotFound desc = could not find container \"b6e6d8376835b9c77ac602b2620afdc252cef4e8ab26b49b859c540cf6469d73\": container with ID starting with b6e6d8376835b9c77ac602b2620afdc252cef4e8ab26b49b859c540cf6469d73 not found: ID does not exist" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.116692 4625 scope.go:117] "RemoveContainer" containerID="bb4a0f59934bb97e26204529927b1dd947926ee0e5e6ca193eef95d918d0bd59" Dec 02 14:08:26 crc kubenswrapper[4625]: E1202 14:08:26.116992 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb4a0f59934bb97e26204529927b1dd947926ee0e5e6ca193eef95d918d0bd59\": container with ID starting with bb4a0f59934bb97e26204529927b1dd947926ee0e5e6ca193eef95d918d0bd59 not found: ID does not exist" containerID="bb4a0f59934bb97e26204529927b1dd947926ee0e5e6ca193eef95d918d0bd59" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.117013 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb4a0f59934bb97e26204529927b1dd947926ee0e5e6ca193eef95d918d0bd59"} err="failed to get container status \"bb4a0f59934bb97e26204529927b1dd947926ee0e5e6ca193eef95d918d0bd59\": rpc error: code = NotFound desc = could not find container \"bb4a0f59934bb97e26204529927b1dd947926ee0e5e6ca193eef95d918d0bd59\": container with ID starting with bb4a0f59934bb97e26204529927b1dd947926ee0e5e6ca193eef95d918d0bd59 not found: ID does not exist" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.136028 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af36017-bcb5-46cb-a07d-d65dd6152f6f-combined-ca-bundle\") pod \"7af36017-bcb5-46cb-a07d-d65dd6152f6f\" (UID: \"7af36017-bcb5-46cb-a07d-d65dd6152f6f\") " Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.136210 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9jlf\" (UniqueName: \"kubernetes.io/projected/7af36017-bcb5-46cb-a07d-d65dd6152f6f-kube-api-access-p9jlf\") pod \"7af36017-bcb5-46cb-a07d-d65dd6152f6f\" (UID: \"7af36017-bcb5-46cb-a07d-d65dd6152f6f\") " Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.136351 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af36017-bcb5-46cb-a07d-d65dd6152f6f-config-data\") pod \"7af36017-bcb5-46cb-a07d-d65dd6152f6f\" (UID: \"7af36017-bcb5-46cb-a07d-d65dd6152f6f\") " Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.242816 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.258821 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7af36017-bcb5-46cb-a07d-d65dd6152f6f-kube-api-access-p9jlf" (OuterVolumeSpecName: "kube-api-access-p9jlf") pod "7af36017-bcb5-46cb-a07d-d65dd6152f6f" (UID: "7af36017-bcb5-46cb-a07d-d65dd6152f6f"). InnerVolumeSpecName "kube-api-access-p9jlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.272947 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7af36017-bcb5-46cb-a07d-d65dd6152f6f-config-data" (OuterVolumeSpecName: "config-data") pod "7af36017-bcb5-46cb-a07d-d65dd6152f6f" (UID: "7af36017-bcb5-46cb-a07d-d65dd6152f6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.273121 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7af36017-bcb5-46cb-a07d-d65dd6152f6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7af36017-bcb5-46cb-a07d-d65dd6152f6f" (UID: "7af36017-bcb5-46cb-a07d-d65dd6152f6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.327020 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.339263 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:08:26 crc kubenswrapper[4625]: E1202 14:08:26.340254 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerName="ceilometer-central-agent" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.340405 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerName="ceilometer-central-agent" Dec 02 14:08:26 crc kubenswrapper[4625]: E1202 14:08:26.340518 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af36017-bcb5-46cb-a07d-d65dd6152f6f" containerName="nova-scheduler-scheduler" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.340613 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af36017-bcb5-46cb-a07d-d65dd6152f6f" containerName="nova-scheduler-scheduler" Dec 02 14:08:26 crc kubenswrapper[4625]: E1202 14:08:26.340797 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerName="proxy-httpd" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.340885 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerName="proxy-httpd" Dec 02 14:08:26 crc kubenswrapper[4625]: E1202 14:08:26.341002 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerName="ceilometer-notification-agent" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.341092 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerName="ceilometer-notification-agent" Dec 02 14:08:26 crc kubenswrapper[4625]: E1202 14:08:26.341206 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerName="sg-core" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.341462 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerName="sg-core" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.341826 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="7af36017-bcb5-46cb-a07d-d65dd6152f6f" containerName="nova-scheduler-scheduler" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.341960 4625 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerName="ceilometer-notification-agent" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.342073 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerName="sg-core" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.342189 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerName="ceilometer-central-agent" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.342360 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" containerName="proxy-httpd" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.347882 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7af36017-bcb5-46cb-a07d-d65dd6152f6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.347918 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9jlf\" (UniqueName: \"kubernetes.io/projected/7af36017-bcb5-46cb-a07d-d65dd6152f6f-kube-api-access-p9jlf\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.347930 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7af36017-bcb5-46cb-a07d-d65dd6152f6f-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.355330 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.355938 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.360489 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.360732 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.360952 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.372063 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.412255 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.452540 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-run-httpd\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.458097 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-config-data\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.458535 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.458632 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.458750 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v456\" (UniqueName: \"kubernetes.io/projected/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-kube-api-access-4v456\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.458954 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-scripts\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.459224 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.459329 4625 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-log-httpd\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.562610 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.562658 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-log-httpd\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.562711 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-run-httpd\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.562754 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-config-data\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.562795 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.562824 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.562859 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v456\" (UniqueName: \"kubernetes.io/projected/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-kube-api-access-4v456\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.562921 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-scripts\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.565826 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xf8ps"] Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.569645 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-log-httpd\") pod 
\"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.569789 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-run-httpd\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.574830 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.575724 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.578056 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-config-data\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.590782 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.595380 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-scripts\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.598098 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v456\" (UniqueName: \"kubernetes.io/projected/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-kube-api-access-4v456\") pod \"ceilometer-0\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.722998 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.872033 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.879925 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65a55923-8fce-4593-ab7c-5b399ceeddcf" path="/var/lib/kubelet/pods/65a55923-8fce-4593-ab7c-5b399ceeddcf/volumes" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.884119 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3" path="/var/lib/kubelet/pods/6fb94393-eb1c-4d1a-ae73-963cf1a8e1f3/volumes" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.885817 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7af36017-bcb5-46cb-a07d-d65dd6152f6f","Type":"ContainerDied","Data":"41f9619bf04a9d3f8c240e1d02c350259c9487023ade9dcefc26043284f259f0"} Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.888611 4625 scope.go:117] "RemoveContainer" containerID="cdb762fb5d41ad81cfb0a1769b50fe224d7248e7f75f54edc6d023f6820a4e1e" Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.909297 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e6e2c85-21fe-464c-9954-940e1c3b138b","Type":"ContainerStarted","Data":"590d43d242b758ad934ef0ee13e290ae82626e94efb1b4ba8954fd0913e01771"} Dec 02 14:08:26 crc kubenswrapper[4625]: I1202 14:08:26.909401 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e6e2c85-21fe-464c-9954-940e1c3b138b","Type":"ContainerStarted","Data":"b3312b2deb349b964877bce7acabe259a130d461ccd51d61de57a28bcbc2359b"} Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.004423 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.023434 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.036741 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.043133 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.050749 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.067318 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.079192 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1153514-f059-41ea-8eb2-e95aac32f061-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d1153514-f059-41ea-8eb2-e95aac32f061\") " pod="openstack/nova-scheduler-0" Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.079269 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j5sw\" (UniqueName: \"kubernetes.io/projected/d1153514-f059-41ea-8eb2-e95aac32f061-kube-api-access-9j5sw\") pod \"nova-scheduler-0\" (UID: \"d1153514-f059-41ea-8eb2-e95aac32f061\") " pod="openstack/nova-scheduler-0" Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.079296 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1153514-f059-41ea-8eb2-e95aac32f061-config-data\") pod \"nova-scheduler-0\" (UID: \"d1153514-f059-41ea-8eb2-e95aac32f061\") " pod="openstack/nova-scheduler-0" Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.190573 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1153514-f059-41ea-8eb2-e95aac32f061-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d1153514-f059-41ea-8eb2-e95aac32f061\") " pod="openstack/nova-scheduler-0" Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.190889 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j5sw\" (UniqueName: \"kubernetes.io/projected/d1153514-f059-41ea-8eb2-e95aac32f061-kube-api-access-9j5sw\") pod \"nova-scheduler-0\" (UID: \"d1153514-f059-41ea-8eb2-e95aac32f061\") " pod="openstack/nova-scheduler-0" Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.190967 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1153514-f059-41ea-8eb2-e95aac32f061-config-data\") pod \"nova-scheduler-0\" (UID: \"d1153514-f059-41ea-8eb2-e95aac32f061\") " pod="openstack/nova-scheduler-0" Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.201826 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1153514-f059-41ea-8eb2-e95aac32f061-config-data\") pod \"nova-scheduler-0\" (UID: \"d1153514-f059-41ea-8eb2-e95aac32f061\") " pod="openstack/nova-scheduler-0" Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.213584 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1153514-f059-41ea-8eb2-e95aac32f061-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d1153514-f059-41ea-8eb2-e95aac32f061\") " pod="openstack/nova-scheduler-0" Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.263071 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j5sw\" (UniqueName: 
\"kubernetes.io/projected/d1153514-f059-41ea-8eb2-e95aac32f061-kube-api-access-9j5sw\") pod \"nova-scheduler-0\" (UID: \"d1153514-f059-41ea-8eb2-e95aac32f061\") " pod="openstack/nova-scheduler-0" Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.460595 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.508396 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:08:27 crc kubenswrapper[4625]: W1202 14:08:27.541248 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcbed4fb_57cf_43b7_9916_d0d5d26dba16.slice/crio-0f3308bc81475022096691753b76b0fcab0fda7477d43fa6bb9380da48ee001b WatchSource:0}: Error finding container 0f3308bc81475022096691753b76b0fcab0fda7477d43fa6bb9380da48ee001b: Status 404 returned error can't find the container with id 0f3308bc81475022096691753b76b0fcab0fda7477d43fa6bb9380da48ee001b Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.944386 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcbed4fb-57cf-43b7-9916-d0d5d26dba16","Type":"ContainerStarted","Data":"0f3308bc81475022096691753b76b0fcab0fda7477d43fa6bb9380da48ee001b"} Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.952445 4625 generic.go:334] "Generic (PLEG): container finished" podID="ed348874-4978-434a-bc74-c58f1acc7c05" containerID="f2ffae93c2102d8e8b295fa30f05ab98177b597f49654457f11886e9beb71e74" exitCode=0 Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.952747 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xf8ps" podUID="eccd8f47-db22-4700-8db7-b5e94abc38ed" containerName="registry-server" containerID="cri-o://0b72565b371899063673d0cab17739af04098ccc796e9eb62afd206d1dffc37c" gracePeriod=2 Dec 02 14:08:27 crc kubenswrapper[4625]: I1202 14:08:27.953099 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ed348874-4978-434a-bc74-c58f1acc7c05","Type":"ContainerDied","Data":"f2ffae93c2102d8e8b295fa30f05ab98177b597f49654457f11886e9beb71e74"} Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.247712 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.267841 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.532634 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.563719 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwrd7\" (UniqueName: \"kubernetes.io/projected/ed348874-4978-434a-bc74-c58f1acc7c05-kube-api-access-zwrd7\") pod \"ed348874-4978-434a-bc74-c58f1acc7c05\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.563844 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-combined-ca-bundle\") pod \"ed348874-4978-434a-bc74-c58f1acc7c05\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.563961 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed348874-4978-434a-bc74-c58f1acc7c05-logs\") pod \"ed348874-4978-434a-bc74-c58f1acc7c05\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.564165 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-nova-metadata-tls-certs\") pod \"ed348874-4978-434a-bc74-c58f1acc7c05\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.564226 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-config-data\") pod \"ed348874-4978-434a-bc74-c58f1acc7c05\" (UID: \"ed348874-4978-434a-bc74-c58f1acc7c05\") " Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.581344 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed348874-4978-434a-bc74-c58f1acc7c05-logs" (OuterVolumeSpecName: "logs") pod "ed348874-4978-434a-bc74-c58f1acc7c05" (UID: "ed348874-4978-434a-bc74-c58f1acc7c05"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.601759 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed348874-4978-434a-bc74-c58f1acc7c05-kube-api-access-zwrd7" (OuterVolumeSpecName: "kube-api-access-zwrd7") pod "ed348874-4978-434a-bc74-c58f1acc7c05" (UID: "ed348874-4978-434a-bc74-c58f1acc7c05"). InnerVolumeSpecName "kube-api-access-zwrd7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.670649 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwrd7\" (UniqueName: \"kubernetes.io/projected/ed348874-4978-434a-bc74-c58f1acc7c05-kube-api-access-zwrd7\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.670682 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed348874-4978-434a-bc74-c58f1acc7c05-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.709724 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-config-data" (OuterVolumeSpecName: "config-data") pod "ed348874-4978-434a-bc74-c58f1acc7c05" (UID: "ed348874-4978-434a-bc74-c58f1acc7c05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.716609 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed348874-4978-434a-bc74-c58f1acc7c05" (UID: "ed348874-4978-434a-bc74-c58f1acc7c05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.774327 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.774376 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.786969 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.817095 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ed348874-4978-434a-bc74-c58f1acc7c05" (UID: "ed348874-4978-434a-bc74-c58f1acc7c05"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.875817 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6lkm\" (UniqueName: \"kubernetes.io/projected/eccd8f47-db22-4700-8db7-b5e94abc38ed-kube-api-access-l6lkm\") pod \"eccd8f47-db22-4700-8db7-b5e94abc38ed\" (UID: \"eccd8f47-db22-4700-8db7-b5e94abc38ed\") " Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.876454 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccd8f47-db22-4700-8db7-b5e94abc38ed-utilities\") pod \"eccd8f47-db22-4700-8db7-b5e94abc38ed\" (UID: \"eccd8f47-db22-4700-8db7-b5e94abc38ed\") " Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.876530 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccd8f47-db22-4700-8db7-b5e94abc38ed-catalog-content\") pod \"eccd8f47-db22-4700-8db7-b5e94abc38ed\" (UID: \"eccd8f47-db22-4700-8db7-b5e94abc38ed\") " Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.877137 4625 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed348874-4978-434a-bc74-c58f1acc7c05-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.886977 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eccd8f47-db22-4700-8db7-b5e94abc38ed-utilities" (OuterVolumeSpecName: "utilities") pod "eccd8f47-db22-4700-8db7-b5e94abc38ed" (UID: "eccd8f47-db22-4700-8db7-b5e94abc38ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.887225 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eccd8f47-db22-4700-8db7-b5e94abc38ed-kube-api-access-l6lkm" (OuterVolumeSpecName: "kube-api-access-l6lkm") pod "eccd8f47-db22-4700-8db7-b5e94abc38ed" (UID: "eccd8f47-db22-4700-8db7-b5e94abc38ed"). InnerVolumeSpecName "kube-api-access-l6lkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.903945 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7af36017-bcb5-46cb-a07d-d65dd6152f6f" path="/var/lib/kubelet/pods/7af36017-bcb5-46cb-a07d-d65dd6152f6f/volumes" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.978878 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6lkm\" (UniqueName: \"kubernetes.io/projected/eccd8f47-db22-4700-8db7-b5e94abc38ed-kube-api-access-l6lkm\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.978912 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eccd8f47-db22-4700-8db7-b5e94abc38ed-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.980907 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.981083 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ed348874-4978-434a-bc74-c58f1acc7c05","Type":"ContainerDied","Data":"c56a65fe38cb734998f7fe0bbc5e1f6a7e3f36269711853856cd69f60cc48572"} Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.981305 4625 scope.go:117] "RemoveContainer" containerID="f2ffae93c2102d8e8b295fa30f05ab98177b597f49654457f11886e9beb71e74" Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.985819 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d1153514-f059-41ea-8eb2-e95aac32f061","Type":"ContainerStarted","Data":"4ca2a6dae7d079772c4ffa9ea78ff5f04c6be9ba26a3ec24968d8d5e95a60e57"} Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.985941 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d1153514-f059-41ea-8eb2-e95aac32f061","Type":"ContainerStarted","Data":"f98f1e1d4bd01bb98494edd1797180bf9d2649839f3f4e81f6e41d0bbadea6c7"} Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.993482 4625 generic.go:334] "Generic (PLEG): container finished" podID="eccd8f47-db22-4700-8db7-b5e94abc38ed" containerID="0b72565b371899063673d0cab17739af04098ccc796e9eb62afd206d1dffc37c" exitCode=0 Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.993584 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xf8ps" event={"ID":"eccd8f47-db22-4700-8db7-b5e94abc38ed","Type":"ContainerDied","Data":"0b72565b371899063673d0cab17739af04098ccc796e9eb62afd206d1dffc37c"} Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.993880 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xf8ps" event={"ID":"eccd8f47-db22-4700-8db7-b5e94abc38ed","Type":"ContainerDied","Data":"17b98bceb6c067c0bd086363d718f7713994382252df25b0dbbb5e45cefd66fc"} Dec 02 14:08:28 crc kubenswrapper[4625]: I1202 14:08:28.994044 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xf8ps" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.007049 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e6e2c85-21fe-464c-9954-940e1c3b138b","Type":"ContainerStarted","Data":"84d8172003fabd32d0dc0a2335d28542fa74f137fedb8de82ec7d6f2cfcf7043"} Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.039626 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.039591645 podStartE2EDuration="3.039591645s" podCreationTimestamp="2025-12-02 14:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:08:29.025762247 +0000 UTC m=+1464.987939312" watchObservedRunningTime="2025-12-02 14:08:29.039591645 +0000 UTC m=+1465.001768720" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.073791 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.103630 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.116266 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eccd8f47-db22-4700-8db7-b5e94abc38ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eccd8f47-db22-4700-8db7-b5e94abc38ed" (UID: "eccd8f47-db22-4700-8db7-b5e94abc38ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.119088 4625 scope.go:117] "RemoveContainer" containerID="03ca74cc18c72cefe25b00917108eb5ffcfe9ed447f7cef16c8474fe12aa1405" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.129934 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:08:29 crc kubenswrapper[4625]: E1202 14:08:29.130495 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccd8f47-db22-4700-8db7-b5e94abc38ed" containerName="extract-content" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.130516 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccd8f47-db22-4700-8db7-b5e94abc38ed" containerName="extract-content" Dec 02 14:08:29 crc kubenswrapper[4625]: E1202 14:08:29.130535 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccd8f47-db22-4700-8db7-b5e94abc38ed" containerName="extract-utilities" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.130541 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccd8f47-db22-4700-8db7-b5e94abc38ed" containerName="extract-utilities" Dec 02 14:08:29 crc kubenswrapper[4625]: E1202 14:08:29.130550 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed348874-4978-434a-bc74-c58f1acc7c05" containerName="nova-metadata-metadata" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.130557 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed348874-4978-434a-bc74-c58f1acc7c05" containerName="nova-metadata-metadata" Dec 02 14:08:29 crc kubenswrapper[4625]: E1202 14:08:29.130574 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed348874-4978-434a-bc74-c58f1acc7c05" containerName="nova-metadata-log" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.130580 4625 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ed348874-4978-434a-bc74-c58f1acc7c05" containerName="nova-metadata-log" Dec 02 14:08:29 crc kubenswrapper[4625]: E1202 14:08:29.130618 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccd8f47-db22-4700-8db7-b5e94abc38ed" containerName="registry-server" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.130624 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccd8f47-db22-4700-8db7-b5e94abc38ed" containerName="registry-server" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.130807 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="eccd8f47-db22-4700-8db7-b5e94abc38ed" containerName="registry-server" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.130821 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed348874-4978-434a-bc74-c58f1acc7c05" containerName="nova-metadata-log" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.136526 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed348874-4978-434a-bc74-c58f1acc7c05" containerName="nova-metadata-metadata" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.137836 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.143187 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.143151709 podStartE2EDuration="4.143151709s" podCreationTimestamp="2025-12-02 14:08:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:08:29.11075176 +0000 UTC m=+1465.072928835" watchObservedRunningTime="2025-12-02 14:08:29.143151709 +0000 UTC m=+1465.105328784" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.144134 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.144413 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.185512 4625 scope.go:117] "RemoveContainer" containerID="0b72565b371899063673d0cab17739af04098ccc796e9eb62afd206d1dffc37c" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.191706 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-config-data\") pod \"nova-metadata-0\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.192066 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9fcr\" (UniqueName: \"kubernetes.io/projected/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-kube-api-access-j9fcr\") pod \"nova-metadata-0\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.192107 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-logs\") pod \"nova-metadata-0\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.192163 4625 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.192231 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.192460 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eccd8f47-db22-4700-8db7-b5e94abc38ed-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.200481 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.211556 4625 scope.go:117] "RemoveContainer" containerID="2f3832feacd68b65faa3bb68e421790d181d48380521042f8c4784e0bb7242f0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.219260 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.246991 4625 scope.go:117] "RemoveContainer" containerID="770065d3faa5033255c2095a53dc39dba4b0add1c3434fc9fdfec75349e62d88" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.295132 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-config-data\") pod \"nova-metadata-0\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.295221 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9fcr\" (UniqueName: \"kubernetes.io/projected/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-kube-api-access-j9fcr\") pod \"nova-metadata-0\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.295278 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-logs\") pod \"nova-metadata-0\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.295360 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.295456 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.297198 
4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-logs\") pod \"nova-metadata-0\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.308210 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.310742 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.310904 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-config-data\") pod \"nova-metadata-0\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.310977 4625 scope.go:117] "RemoveContainer" containerID="0b72565b371899063673d0cab17739af04098ccc796e9eb62afd206d1dffc37c" Dec 02 14:08:29 crc kubenswrapper[4625]: E1202 14:08:29.312339 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b72565b371899063673d0cab17739af04098ccc796e9eb62afd206d1dffc37c\": container with ID starting with 0b72565b371899063673d0cab17739af04098ccc796e9eb62afd206d1dffc37c not found: ID does not exist" containerID="0b72565b371899063673d0cab17739af04098ccc796e9eb62afd206d1dffc37c" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.312394 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b72565b371899063673d0cab17739af04098ccc796e9eb62afd206d1dffc37c"} err="failed to get container status \"0b72565b371899063673d0cab17739af04098ccc796e9eb62afd206d1dffc37c\": rpc error: code = NotFound desc = could not find container \"0b72565b371899063673d0cab17739af04098ccc796e9eb62afd206d1dffc37c\": container with ID starting with 0b72565b371899063673d0cab17739af04098ccc796e9eb62afd206d1dffc37c not found: ID does not exist" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.312433 4625 scope.go:117] "RemoveContainer" containerID="2f3832feacd68b65faa3bb68e421790d181d48380521042f8c4784e0bb7242f0" Dec 02 14:08:29 crc kubenswrapper[4625]: E1202 14:08:29.313560 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f3832feacd68b65faa3bb68e421790d181d48380521042f8c4784e0bb7242f0\": container with ID starting with 2f3832feacd68b65faa3bb68e421790d181d48380521042f8c4784e0bb7242f0 not found: ID does not exist" containerID="2f3832feacd68b65faa3bb68e421790d181d48380521042f8c4784e0bb7242f0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.313588 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f3832feacd68b65faa3bb68e421790d181d48380521042f8c4784e0bb7242f0"} err="failed to get container status 
\"2f3832feacd68b65faa3bb68e421790d181d48380521042f8c4784e0bb7242f0\": rpc error: code = NotFound desc = could not find container \"2f3832feacd68b65faa3bb68e421790d181d48380521042f8c4784e0bb7242f0\": container with ID starting with 2f3832feacd68b65faa3bb68e421790d181d48380521042f8c4784e0bb7242f0 not found: ID does not exist" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.313608 4625 scope.go:117] "RemoveContainer" containerID="770065d3faa5033255c2095a53dc39dba4b0add1c3434fc9fdfec75349e62d88" Dec 02 14:08:29 crc kubenswrapper[4625]: E1202 14:08:29.314721 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"770065d3faa5033255c2095a53dc39dba4b0add1c3434fc9fdfec75349e62d88\": container with ID starting with 770065d3faa5033255c2095a53dc39dba4b0add1c3434fc9fdfec75349e62d88 not found: ID does not exist" containerID="770065d3faa5033255c2095a53dc39dba4b0add1c3434fc9fdfec75349e62d88" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.314818 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"770065d3faa5033255c2095a53dc39dba4b0add1c3434fc9fdfec75349e62d88"} err="failed to get container status \"770065d3faa5033255c2095a53dc39dba4b0add1c3434fc9fdfec75349e62d88\": rpc error: code = NotFound desc = could not find container \"770065d3faa5033255c2095a53dc39dba4b0add1c3434fc9fdfec75349e62d88\": container with ID starting with 770065d3faa5033255c2095a53dc39dba4b0add1c3434fc9fdfec75349e62d88 not found: ID does not exist" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.323622 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9fcr\" (UniqueName: \"kubernetes.io/projected/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-kube-api-access-j9fcr\") pod \"nova-metadata-0\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.461715 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xf8ps"] Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.473300 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xf8ps"] Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.493936 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:08:29 crc kubenswrapper[4625]: I1202 14:08:29.902392 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:08:30 crc kubenswrapper[4625]: I1202 14:08:30.122068 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcbed4fb-57cf-43b7-9916-d0d5d26dba16","Type":"ContainerStarted","Data":"a575e7f61940f08a2599201d4a6d3df5ef80a82150a304a5322c29eb80f59cd5"} Dec 02 14:08:30 crc kubenswrapper[4625]: I1202 14:08:30.123667 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcbed4fb-57cf-43b7-9916-d0d5d26dba16","Type":"ContainerStarted","Data":"29d5866eaae8e13255ebe324687ac12a8b420885c897dd48b4bca6f5321cf172"} Dec 02 14:08:30 crc kubenswrapper[4625]: I1202 14:08:30.152283 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4","Type":"ContainerStarted","Data":"04a54c46b4dc243456da4b3c85bb807ec3893d52872e2a2b6f3ff1069ec7adb1"} Dec 02 14:08:31 crc kubenswrapper[4625]: I1202 14:08:31.013243 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eccd8f47-db22-4700-8db7-b5e94abc38ed" path="/var/lib/kubelet/pods/eccd8f47-db22-4700-8db7-b5e94abc38ed/volumes" Dec 02 14:08:31 crc kubenswrapper[4625]: I1202 14:08:31.016137 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed348874-4978-434a-bc74-c58f1acc7c05" path="/var/lib/kubelet/pods/ed348874-4978-434a-bc74-c58f1acc7c05/volumes" Dec 02 14:08:31 crc kubenswrapper[4625]: I1202 14:08:31.193329 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcbed4fb-57cf-43b7-9916-d0d5d26dba16","Type":"ContainerStarted","Data":"226205fc0b8c67e7dfa6a20516836a94602aa971fd5a7dddb0697ab0c1c028eb"} Dec 02 14:08:31 crc kubenswrapper[4625]: I1202 14:08:31.211462 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4","Type":"ContainerStarted","Data":"6de736b56f21b5c92bf40dd91eef71b9dcd47a539ab9b59f3478f94d5f49b9ad"} Dec 02 14:08:31 crc kubenswrapper[4625]: I1202 14:08:31.211521 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4","Type":"ContainerStarted","Data":"f54411480e7ac220faed5de9a4145a90f2432f322dd4913fa3087dadfd0c78c6"} Dec 02 14:08:31 crc kubenswrapper[4625]: I1202 14:08:31.252676 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.252639032 podStartE2EDuration="2.252639032s" podCreationTimestamp="2025-12-02 14:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:08:31.24216808 +0000 UTC m=+1467.204345175" watchObservedRunningTime="2025-12-02 14:08:31.252639032 +0000 UTC m=+1467.214816107" Dec 02 14:08:32 crc kubenswrapper[4625]: I1202 14:08:32.461530 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 02 14:08:33 crc kubenswrapper[4625]: I1202 14:08:33.236572 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcbed4fb-57cf-43b7-9916-d0d5d26dba16","Type":"ContainerStarted","Data":"b8acbce62a479fb95ab3ccf6f65908407fa68ba4a9234721f65c672359a1aa8f"} Dec 02 14:08:33 crc 
kubenswrapper[4625]: I1202 14:08:33.237340 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 14:08:33 crc kubenswrapper[4625]: I1202 14:08:33.318302 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8558994650000002 podStartE2EDuration="7.318272897s" podCreationTimestamp="2025-12-02 14:08:26 +0000 UTC" firstStartedPulling="2025-12-02 14:08:27.599855892 +0000 UTC m=+1463.562032977" lastFinishedPulling="2025-12-02 14:08:32.062229334 +0000 UTC m=+1468.024406409" observedRunningTime="2025-12-02 14:08:33.296759178 +0000 UTC m=+1469.258936253" watchObservedRunningTime="2025-12-02 14:08:33.318272897 +0000 UTC m=+1469.280449972" Dec 02 14:08:34 crc kubenswrapper[4625]: I1202 14:08:34.276929 4625 generic.go:334] "Generic (PLEG): container finished" podID="e40e6e8b-feb1-4a80-a962-bfad2645f094" containerID="a726198fe6d4784095c28bf76cc4768e8e333b70379c2a78b554fd47d6316b86" exitCode=137 Dec 02 14:08:34 crc kubenswrapper[4625]: I1202 14:08:34.277142 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e40e6e8b-feb1-4a80-a962-bfad2645f094","Type":"ContainerDied","Data":"a726198fe6d4784095c28bf76cc4768e8e333b70379c2a78b554fd47d6316b86"} Dec 02 14:08:34 crc kubenswrapper[4625]: I1202 14:08:34.495158 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 14:08:34 crc kubenswrapper[4625]: I1202 14:08:34.495226 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 14:08:34 crc kubenswrapper[4625]: I1202 14:08:34.551336 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:34 crc kubenswrapper[4625]: I1202 14:08:34.636843 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgmbc\" (UniqueName: \"kubernetes.io/projected/e40e6e8b-feb1-4a80-a962-bfad2645f094-kube-api-access-xgmbc\") pod \"e40e6e8b-feb1-4a80-a962-bfad2645f094\" (UID: \"e40e6e8b-feb1-4a80-a962-bfad2645f094\") " Dec 02 14:08:34 crc kubenswrapper[4625]: I1202 14:08:34.636915 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40e6e8b-feb1-4a80-a962-bfad2645f094-combined-ca-bundle\") pod \"e40e6e8b-feb1-4a80-a962-bfad2645f094\" (UID: \"e40e6e8b-feb1-4a80-a962-bfad2645f094\") " Dec 02 14:08:34 crc kubenswrapper[4625]: I1202 14:08:34.637168 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40e6e8b-feb1-4a80-a962-bfad2645f094-config-data\") pod \"e40e6e8b-feb1-4a80-a962-bfad2645f094\" (UID: \"e40e6e8b-feb1-4a80-a962-bfad2645f094\") " Dec 02 14:08:34 crc kubenswrapper[4625]: I1202 14:08:34.661619 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e40e6e8b-feb1-4a80-a962-bfad2645f094-kube-api-access-xgmbc" (OuterVolumeSpecName: "kube-api-access-xgmbc") pod "e40e6e8b-feb1-4a80-a962-bfad2645f094" (UID: "e40e6e8b-feb1-4a80-a962-bfad2645f094"). InnerVolumeSpecName "kube-api-access-xgmbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:08:34 crc kubenswrapper[4625]: I1202 14:08:34.692866 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40e6e8b-feb1-4a80-a962-bfad2645f094-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e40e6e8b-feb1-4a80-a962-bfad2645f094" (UID: "e40e6e8b-feb1-4a80-a962-bfad2645f094"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:34 crc kubenswrapper[4625]: I1202 14:08:34.710492 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40e6e8b-feb1-4a80-a962-bfad2645f094-config-data" (OuterVolumeSpecName: "config-data") pod "e40e6e8b-feb1-4a80-a962-bfad2645f094" (UID: "e40e6e8b-feb1-4a80-a962-bfad2645f094"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:34 crc kubenswrapper[4625]: I1202 14:08:34.740483 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgmbc\" (UniqueName: \"kubernetes.io/projected/e40e6e8b-feb1-4a80-a962-bfad2645f094-kube-api-access-xgmbc\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:34 crc kubenswrapper[4625]: I1202 14:08:34.740521 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40e6e8b-feb1-4a80-a962-bfad2645f094-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:34 crc kubenswrapper[4625]: I1202 14:08:34.740531 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40e6e8b-feb1-4a80-a962-bfad2645f094-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.290292 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e40e6e8b-feb1-4a80-a962-bfad2645f094","Type":"ContainerDied","Data":"cd4f1e559a92e75217a95314ffc831f16d4934fb96d63a256e2a4c323220807f"} Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.290849 4625 scope.go:117] "RemoveContainer" containerID="a726198fe6d4784095c28bf76cc4768e8e333b70379c2a78b554fd47d6316b86" Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.290410 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.620723 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.622367 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.639405 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.648973 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.731656 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:08:35 crc kubenswrapper[4625]: E1202 14:08:35.732427 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40e6e8b-feb1-4a80-a962-bfad2645f094" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.732448 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40e6e8b-feb1-4a80-a962-bfad2645f094" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.732713 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40e6e8b-feb1-4a80-a962-bfad2645f094" containerName="nova-cell1-novncproxy-novncproxy" Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.733815 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.766184 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.766512 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.766634 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.783041 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.935831 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae13d171-d7b4-4f87-b94b-b19de24b35b6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae13d171-d7b4-4f87-b94b-b19de24b35b6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.935929 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae13d171-d7b4-4f87-b94b-b19de24b35b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae13d171-d7b4-4f87-b94b-b19de24b35b6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.935986 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-899dm\" (UniqueName: \"kubernetes.io/projected/ae13d171-d7b4-4f87-b94b-b19de24b35b6-kube-api-access-899dm\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"ae13d171-d7b4-4f87-b94b-b19de24b35b6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.936020 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae13d171-d7b4-4f87-b94b-b19de24b35b6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae13d171-d7b4-4f87-b94b-b19de24b35b6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:35 crc kubenswrapper[4625]: I1202 14:08:35.936062 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae13d171-d7b4-4f87-b94b-b19de24b35b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae13d171-d7b4-4f87-b94b-b19de24b35b6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:36 crc kubenswrapper[4625]: I1202 14:08:36.039832 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae13d171-d7b4-4f87-b94b-b19de24b35b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae13d171-d7b4-4f87-b94b-b19de24b35b6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:36 crc kubenswrapper[4625]: I1202 14:08:36.039994 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-899dm\" (UniqueName: \"kubernetes.io/projected/ae13d171-d7b4-4f87-b94b-b19de24b35b6-kube-api-access-899dm\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae13d171-d7b4-4f87-b94b-b19de24b35b6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:36 crc kubenswrapper[4625]: I1202 14:08:36.040049 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae13d171-d7b4-4f87-b94b-b19de24b35b6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae13d171-d7b4-4f87-b94b-b19de24b35b6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:36 crc kubenswrapper[4625]: I1202 14:08:36.040120 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae13d171-d7b4-4f87-b94b-b19de24b35b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae13d171-d7b4-4f87-b94b-b19de24b35b6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:36 crc kubenswrapper[4625]: I1202 14:08:36.040268 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae13d171-d7b4-4f87-b94b-b19de24b35b6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae13d171-d7b4-4f87-b94b-b19de24b35b6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:36 crc kubenswrapper[4625]: I1202 14:08:36.056596 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae13d171-d7b4-4f87-b94b-b19de24b35b6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae13d171-d7b4-4f87-b94b-b19de24b35b6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:36 crc kubenswrapper[4625]: I1202 14:08:36.064255 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae13d171-d7b4-4f87-b94b-b19de24b35b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae13d171-d7b4-4f87-b94b-b19de24b35b6\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:36 crc kubenswrapper[4625]: I1202 14:08:36.075099 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae13d171-d7b4-4f87-b94b-b19de24b35b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae13d171-d7b4-4f87-b94b-b19de24b35b6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:36 crc kubenswrapper[4625]: I1202 14:08:36.091173 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae13d171-d7b4-4f87-b94b-b19de24b35b6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae13d171-d7b4-4f87-b94b-b19de24b35b6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:36 crc kubenswrapper[4625]: I1202 14:08:36.111981 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-899dm\" (UniqueName: \"kubernetes.io/projected/ae13d171-d7b4-4f87-b94b-b19de24b35b6-kube-api-access-899dm\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae13d171-d7b4-4f87-b94b-b19de24b35b6\") " pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:36 crc kubenswrapper[4625]: I1202 14:08:36.371992 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:36 crc kubenswrapper[4625]: I1202 14:08:36.709484 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5e6e2c85-21fe-464c-9954-940e1c3b138b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:08:36 crc kubenswrapper[4625]: I1202 14:08:36.710147 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5e6e2c85-21fe-464c-9954-940e1c3b138b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:08:36 crc kubenswrapper[4625]: I1202 14:08:36.879167 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e40e6e8b-feb1-4a80-a962-bfad2645f094" path="/var/lib/kubelet/pods/e40e6e8b-feb1-4a80-a962-bfad2645f094/volumes" Dec 02 14:08:37 crc kubenswrapper[4625]: I1202 14:08:37.018759 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 02 14:08:37 crc kubenswrapper[4625]: W1202 14:08:37.027012 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae13d171_d7b4_4f87_b94b_b19de24b35b6.slice/crio-c740b0fa2cb860f01e17792701ae6c8efb7e25d3ad5da75241975fb72b20be78 WatchSource:0}: Error finding container c740b0fa2cb860f01e17792701ae6c8efb7e25d3ad5da75241975fb72b20be78: Status 404 returned error can't find the container with id c740b0fa2cb860f01e17792701ae6c8efb7e25d3ad5da75241975fb72b20be78 Dec 02 14:08:37 crc kubenswrapper[4625]: I1202 14:08:37.320400 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ae13d171-d7b4-4f87-b94b-b19de24b35b6","Type":"ContainerStarted","Data":"c740b0fa2cb860f01e17792701ae6c8efb7e25d3ad5da75241975fb72b20be78"} Dec 02 14:08:37 crc kubenswrapper[4625]: I1202 14:08:37.461952 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 02 14:08:37 crc kubenswrapper[4625]: 
I1202 14:08:37.502655 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 02 14:08:38 crc kubenswrapper[4625]: I1202 14:08:38.359530 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ae13d171-d7b4-4f87-b94b-b19de24b35b6","Type":"ContainerStarted","Data":"5158c66c7dfdcfd0d9a329920ab2aa90aa9cd13849a7013b137c876f76879791"} Dec 02 14:08:38 crc kubenswrapper[4625]: I1202 14:08:38.386236 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.386203822 podStartE2EDuration="3.386203822s" podCreationTimestamp="2025-12-02 14:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:08:38.379645463 +0000 UTC m=+1474.341822528" watchObservedRunningTime="2025-12-02 14:08:38.386203822 +0000 UTC m=+1474.348380897" Dec 02 14:08:38 crc kubenswrapper[4625]: I1202 14:08:38.419234 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 02 14:08:39 crc kubenswrapper[4625]: I1202 14:08:39.495010 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 14:08:39 crc kubenswrapper[4625]: I1202 14:08:39.496414 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 14:08:40 crc kubenswrapper[4625]: I1202 14:08:40.511582 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 14:08:40 crc kubenswrapper[4625]: I1202 14:08:40.512467 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 14:08:41 crc kubenswrapper[4625]: I1202 14:08:41.373643 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:45 crc kubenswrapper[4625]: I1202 14:08:45.469683 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 14:08:45 crc kubenswrapper[4625]: I1202 14:08:45.470606 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 14:08:45 crc kubenswrapper[4625]: I1202 14:08:45.472160 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 14:08:45 crc kubenswrapper[4625]: I1202 14:08:45.475260 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 14:08:46 crc kubenswrapper[4625]: I1202 14:08:46.373184 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:46 crc kubenswrapper[4625]: I1202 14:08:46.402983 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:46 crc kubenswrapper[4625]: I1202 14:08:46.472021 4625 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 14:08:46 crc kubenswrapper[4625]: I1202 14:08:46.475438 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 14:08:46 crc kubenswrapper[4625]: I1202 14:08:46.539235 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 02 14:08:46 crc kubenswrapper[4625]: I1202 14:08:46.810134 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-z6sn6"] Dec 02 14:08:46 crc kubenswrapper[4625]: I1202 14:08:46.820961 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:46 crc kubenswrapper[4625]: I1202 14:08:46.845656 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-z6sn6"] Dec 02 14:08:46 crc kubenswrapper[4625]: I1202 14:08:46.907744 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:46 crc kubenswrapper[4625]: I1202 14:08:46.908056 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:46 crc kubenswrapper[4625]: I1202 14:08:46.908235 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwg88\" (UniqueName: \"kubernetes.io/projected/610d1bad-c2ce-4533-914b-ed46676dc3b8-kube-api-access-kwg88\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:46 crc kubenswrapper[4625]: I1202 14:08:46.908291 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:46 crc kubenswrapper[4625]: I1202 14:08:46.908482 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:46 crc kubenswrapper[4625]: I1202 14:08:46.908509 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-config\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.011142 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.011247 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwg88\" (UniqueName: \"kubernetes.io/projected/610d1bad-c2ce-4533-914b-ed46676dc3b8-kube-api-access-kwg88\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.011281 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.011372 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.011391 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-config\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.011485 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.012540 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.013106 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.013610 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-s9zxp"] Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.013999 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.014591 4625 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.015160 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-config\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.017864 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-s9zxp" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.021902 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.022216 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.043542 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwg88\" (UniqueName: \"kubernetes.io/projected/610d1bad-c2ce-4533-914b-ed46676dc3b8-kube-api-access-kwg88\") pod \"dnsmasq-dns-89c5cd4d5-z6sn6\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.072839 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-s9zxp"] Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.114025 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-s9zxp\" (UID: \"6bc01764-a48e-4ca9-b717-f79df542b76e\") " pod="openstack/nova-cell1-cell-mapping-s9zxp" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.114460 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-config-data\") pod \"nova-cell1-cell-mapping-s9zxp\" (UID: \"6bc01764-a48e-4ca9-b717-f79df542b76e\") " pod="openstack/nova-cell1-cell-mapping-s9zxp" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.114621 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6p8k\" (UniqueName: \"kubernetes.io/projected/6bc01764-a48e-4ca9-b717-f79df542b76e-kube-api-access-g6p8k\") pod \"nova-cell1-cell-mapping-s9zxp\" (UID: \"6bc01764-a48e-4ca9-b717-f79df542b76e\") " pod="openstack/nova-cell1-cell-mapping-s9zxp" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.114802 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-scripts\") pod \"nova-cell1-cell-mapping-s9zxp\" (UID: \"6bc01764-a48e-4ca9-b717-f79df542b76e\") " pod="openstack/nova-cell1-cell-mapping-s9zxp" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.142112 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.216965 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-s9zxp\" (UID: \"6bc01764-a48e-4ca9-b717-f79df542b76e\") " pod="openstack/nova-cell1-cell-mapping-s9zxp" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.217659 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-config-data\") pod \"nova-cell1-cell-mapping-s9zxp\" (UID: \"6bc01764-a48e-4ca9-b717-f79df542b76e\") " pod="openstack/nova-cell1-cell-mapping-s9zxp" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.217694 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6p8k\" (UniqueName: \"kubernetes.io/projected/6bc01764-a48e-4ca9-b717-f79df542b76e-kube-api-access-g6p8k\") pod \"nova-cell1-cell-mapping-s9zxp\" (UID: \"6bc01764-a48e-4ca9-b717-f79df542b76e\") " pod="openstack/nova-cell1-cell-mapping-s9zxp" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.218754 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-scripts\") pod \"nova-cell1-cell-mapping-s9zxp\" (UID: \"6bc01764-a48e-4ca9-b717-f79df542b76e\") " pod="openstack/nova-cell1-cell-mapping-s9zxp" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.226209 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-config-data\") pod \"nova-cell1-cell-mapping-s9zxp\" (UID: \"6bc01764-a48e-4ca9-b717-f79df542b76e\") " pod="openstack/nova-cell1-cell-mapping-s9zxp" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.228037 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-s9zxp\" (UID: \"6bc01764-a48e-4ca9-b717-f79df542b76e\") " pod="openstack/nova-cell1-cell-mapping-s9zxp" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.228116 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-scripts\") pod \"nova-cell1-cell-mapping-s9zxp\" (UID: \"6bc01764-a48e-4ca9-b717-f79df542b76e\") " pod="openstack/nova-cell1-cell-mapping-s9zxp" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.253936 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6p8k\" (UniqueName: \"kubernetes.io/projected/6bc01764-a48e-4ca9-b717-f79df542b76e-kube-api-access-g6p8k\") pod \"nova-cell1-cell-mapping-s9zxp\" (UID: \"6bc01764-a48e-4ca9-b717-f79df542b76e\") " pod="openstack/nova-cell1-cell-mapping-s9zxp" Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.412022 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-s9zxp" Dec 02 14:08:47 crc kubenswrapper[4625]: W1202 14:08:47.972939 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod610d1bad_c2ce_4533_914b_ed46676dc3b8.slice/crio-0cc788643c8487a468c5927caf77a8dd20c6e4fdaa93ac30efe951923ce79f54 WatchSource:0}: Error finding container 0cc788643c8487a468c5927caf77a8dd20c6e4fdaa93ac30efe951923ce79f54: Status 404 returned error can't find the container with id 0cc788643c8487a468c5927caf77a8dd20c6e4fdaa93ac30efe951923ce79f54 Dec 02 14:08:47 crc kubenswrapper[4625]: I1202 14:08:47.974066 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-z6sn6"] Dec 02 14:08:48 crc kubenswrapper[4625]: I1202 14:08:48.246482 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-s9zxp"] Dec 02 14:08:48 crc kubenswrapper[4625]: I1202 14:08:48.536427 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-s9zxp" event={"ID":"6bc01764-a48e-4ca9-b717-f79df542b76e","Type":"ContainerStarted","Data":"1dfafbf446ca993aae561fb06217dd15e27f8e219c9f79f5b0e395dbec4f9ec4"} Dec 02 14:08:48 crc kubenswrapper[4625]: I1202 14:08:48.539520 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" event={"ID":"610d1bad-c2ce-4533-914b-ed46676dc3b8","Type":"ContainerStarted","Data":"0cc788643c8487a468c5927caf77a8dd20c6e4fdaa93ac30efe951923ce79f54"} Dec 02 14:08:48 crc kubenswrapper[4625]: I1202 14:08:48.779234 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="2e108301-d560-49b4-a4b2-a2f45c2fa8fd" containerName="galera" probeResult="failure" output="command timed out" Dec 02 14:08:48 crc kubenswrapper[4625]: I1202 14:08:48.784607 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="2e108301-d560-49b4-a4b2-a2f45c2fa8fd" containerName="galera" probeResult="failure" output="command timed out" Dec 02 14:08:49 crc kubenswrapper[4625]: I1202 14:08:49.272036 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:08:49 crc kubenswrapper[4625]: I1202 14:08:49.273959 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:08:49 crc kubenswrapper[4625]: I1202 14:08:49.274146 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 14:08:49 crc kubenswrapper[4625]: I1202 14:08:49.275448 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22eacb360cbd64994ad7dde3fa2964df2620c7bf593d351571346615fdf674ec"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:08:49 crc kubenswrapper[4625]: I1202 
14:08:49.275581 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://22eacb360cbd64994ad7dde3fa2964df2620c7bf593d351571346615fdf674ec" gracePeriod=600 Dec 02 14:08:49 crc kubenswrapper[4625]: I1202 14:08:49.502994 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 14:08:49 crc kubenswrapper[4625]: I1202 14:08:49.510688 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 14:08:49 crc kubenswrapper[4625]: I1202 14:08:49.529344 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 14:08:49 crc kubenswrapper[4625]: I1202 14:08:49.552229 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-s9zxp" event={"ID":"6bc01764-a48e-4ca9-b717-f79df542b76e","Type":"ContainerStarted","Data":"083279c60aad88e82f0d585d1e3599eb65265da2e14cba5840fda19e99a18e9f"} Dec 02 14:08:49 crc kubenswrapper[4625]: I1202 14:08:49.554769 4625 generic.go:334] "Generic (PLEG): container finished" podID="610d1bad-c2ce-4533-914b-ed46676dc3b8" containerID="d82c85f40dba2abfc1dec54bab1495ee85ee111e69c8195f5ba945009bf7ef75" exitCode=0 Dec 02 14:08:49 crc kubenswrapper[4625]: I1202 14:08:49.554843 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" event={"ID":"610d1bad-c2ce-4533-914b-ed46676dc3b8","Type":"ContainerDied","Data":"d82c85f40dba2abfc1dec54bab1495ee85ee111e69c8195f5ba945009bf7ef75"} Dec 02 14:08:49 crc kubenswrapper[4625]: I1202 14:08:49.558919 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="22eacb360cbd64994ad7dde3fa2964df2620c7bf593d351571346615fdf674ec" exitCode=0 Dec 02 14:08:49 crc kubenswrapper[4625]: I1202 14:08:49.560592 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"22eacb360cbd64994ad7dde3fa2964df2620c7bf593d351571346615fdf674ec"} Dec 02 14:08:49 crc kubenswrapper[4625]: I1202 14:08:49.560641 4625 scope.go:117] "RemoveContainer" containerID="5271eaf0b8b85861d7c190af249c8999cbc2c292aa3724e0a85121cbb59f2516" Dec 02 14:08:49 crc kubenswrapper[4625]: I1202 14:08:49.617886 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 14:08:49 crc kubenswrapper[4625]: I1202 14:08:49.650619 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-s9zxp" podStartSLOduration=3.650585862 podStartE2EDuration="3.650585862s" podCreationTimestamp="2025-12-02 14:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:08:49.630345437 +0000 UTC m=+1485.592522522" watchObservedRunningTime="2025-12-02 14:08:49.650585862 +0000 UTC m=+1485.612762947" Dec 02 14:08:50 crc kubenswrapper[4625]: I1202 14:08:50.453671 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:08:50 crc kubenswrapper[4625]: I1202 14:08:50.459350 4625 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="5e6e2c85-21fe-464c-9954-940e1c3b138b" containerName="nova-api-log" containerID="cri-o://590d43d242b758ad934ef0ee13e290ae82626e94efb1b4ba8954fd0913e01771" gracePeriod=30 Dec 02 14:08:50 crc kubenswrapper[4625]: I1202 14:08:50.460455 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5e6e2c85-21fe-464c-9954-940e1c3b138b" containerName="nova-api-api" containerID="cri-o://84d8172003fabd32d0dc0a2335d28542fa74f137fedb8de82ec7d6f2cfcf7043" gracePeriod=30 Dec 02 14:08:50 crc kubenswrapper[4625]: I1202 14:08:50.616982 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03"} Dec 02 14:08:50 crc kubenswrapper[4625]: I1202 14:08:50.627565 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" event={"ID":"610d1bad-c2ce-4533-914b-ed46676dc3b8","Type":"ContainerStarted","Data":"cef341be6da69701ca3ee73d9fe643944a0d76363345a50d8cd1cb03ab585698"} Dec 02 14:08:50 crc kubenswrapper[4625]: I1202 14:08:50.628454 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:50 crc kubenswrapper[4625]: I1202 14:08:50.685385 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" podStartSLOduration=4.685132974 podStartE2EDuration="4.685132974s" podCreationTimestamp="2025-12-02 14:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:08:50.680424072 +0000 UTC m=+1486.642601147" watchObservedRunningTime="2025-12-02 14:08:50.685132974 +0000 UTC m=+1486.647310049" Dec 02 14:08:51 crc kubenswrapper[4625]: I1202 14:08:51.646007 4625 generic.go:334] "Generic (PLEG): container finished" podID="5e6e2c85-21fe-464c-9954-940e1c3b138b" containerID="590d43d242b758ad934ef0ee13e290ae82626e94efb1b4ba8954fd0913e01771" exitCode=143 Dec 02 14:08:51 crc kubenswrapper[4625]: I1202 14:08:51.646098 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e6e2c85-21fe-464c-9954-940e1c3b138b","Type":"ContainerDied","Data":"590d43d242b758ad934ef0ee13e290ae82626e94efb1b4ba8954fd0913e01771"} Dec 02 14:08:51 crc kubenswrapper[4625]: I1202 14:08:51.967991 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:08:51 crc kubenswrapper[4625]: I1202 14:08:51.968804 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerName="ceilometer-central-agent" containerID="cri-o://29d5866eaae8e13255ebe324687ac12a8b420885c897dd48b4bca6f5321cf172" gracePeriod=30 Dec 02 14:08:51 crc kubenswrapper[4625]: I1202 14:08:51.969551 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerName="proxy-httpd" containerID="cri-o://b8acbce62a479fb95ab3ccf6f65908407fa68ba4a9234721f65c672359a1aa8f" gracePeriod=30 Dec 02 14:08:51 crc kubenswrapper[4625]: I1202 14:08:51.969846 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" 
containerName="ceilometer-notification-agent" containerID="cri-o://a575e7f61940f08a2599201d4a6d3df5ef80a82150a304a5322c29eb80f59cd5" gracePeriod=30 Dec 02 14:08:51 crc kubenswrapper[4625]: I1202 14:08:51.969911 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerName="sg-core" containerID="cri-o://226205fc0b8c67e7dfa6a20516836a94602aa971fd5a7dddb0697ab0c1c028eb" gracePeriod=30 Dec 02 14:08:52 crc kubenswrapper[4625]: I1202 14:08:52.001350 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.193:3000/\": read tcp 10.217.0.2:58390->10.217.0.193:3000: read: connection reset by peer" Dec 02 14:08:52 crc kubenswrapper[4625]: I1202 14:08:52.661580 4625 generic.go:334] "Generic (PLEG): container finished" podID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerID="b8acbce62a479fb95ab3ccf6f65908407fa68ba4a9234721f65c672359a1aa8f" exitCode=0 Dec 02 14:08:52 crc kubenswrapper[4625]: I1202 14:08:52.661636 4625 generic.go:334] "Generic (PLEG): container finished" podID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerID="226205fc0b8c67e7dfa6a20516836a94602aa971fd5a7dddb0697ab0c1c028eb" exitCode=2 Dec 02 14:08:52 crc kubenswrapper[4625]: I1202 14:08:52.661643 4625 generic.go:334] "Generic (PLEG): container finished" podID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerID="29d5866eaae8e13255ebe324687ac12a8b420885c897dd48b4bca6f5321cf172" exitCode=0 Dec 02 14:08:52 crc kubenswrapper[4625]: I1202 14:08:52.661702 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcbed4fb-57cf-43b7-9916-d0d5d26dba16","Type":"ContainerDied","Data":"b8acbce62a479fb95ab3ccf6f65908407fa68ba4a9234721f65c672359a1aa8f"} Dec 02 14:08:52 crc kubenswrapper[4625]: I1202 14:08:52.661779 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcbed4fb-57cf-43b7-9916-d0d5d26dba16","Type":"ContainerDied","Data":"226205fc0b8c67e7dfa6a20516836a94602aa971fd5a7dddb0697ab0c1c028eb"} Dec 02 14:08:52 crc kubenswrapper[4625]: I1202 14:08:52.661794 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcbed4fb-57cf-43b7-9916-d0d5d26dba16","Type":"ContainerDied","Data":"29d5866eaae8e13255ebe324687ac12a8b420885c897dd48b4bca6f5321cf172"} Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.166400 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.329988 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6e2c85-21fe-464c-9954-940e1c3b138b-combined-ca-bundle\") pod \"5e6e2c85-21fe-464c-9954-940e1c3b138b\" (UID: \"5e6e2c85-21fe-464c-9954-940e1c3b138b\") " Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.330389 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6e2c85-21fe-464c-9954-940e1c3b138b-config-data\") pod \"5e6e2c85-21fe-464c-9954-940e1c3b138b\" (UID: \"5e6e2c85-21fe-464c-9954-940e1c3b138b\") " Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.330470 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jbcp\" (UniqueName: \"kubernetes.io/projected/5e6e2c85-21fe-464c-9954-940e1c3b138b-kube-api-access-5jbcp\") pod \"5e6e2c85-21fe-464c-9954-940e1c3b138b\" (UID: \"5e6e2c85-21fe-464c-9954-940e1c3b138b\") " Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.330522 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6e2c85-21fe-464c-9954-940e1c3b138b-logs\") pod \"5e6e2c85-21fe-464c-9954-940e1c3b138b\" (UID: \"5e6e2c85-21fe-464c-9954-940e1c3b138b\") " Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.331672 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e6e2c85-21fe-464c-9954-940e1c3b138b-logs" (OuterVolumeSpecName: "logs") pod "5e6e2c85-21fe-464c-9954-940e1c3b138b" (UID: "5e6e2c85-21fe-464c-9954-940e1c3b138b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.364764 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6e2c85-21fe-464c-9954-940e1c3b138b-kube-api-access-5jbcp" (OuterVolumeSpecName: "kube-api-access-5jbcp") pod "5e6e2c85-21fe-464c-9954-940e1c3b138b" (UID: "5e6e2c85-21fe-464c-9954-940e1c3b138b"). InnerVolumeSpecName "kube-api-access-5jbcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.370225 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6e2c85-21fe-464c-9954-940e1c3b138b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e6e2c85-21fe-464c-9954-940e1c3b138b" (UID: "5e6e2c85-21fe-464c-9954-940e1c3b138b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.402815 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6e2c85-21fe-464c-9954-940e1c3b138b-config-data" (OuterVolumeSpecName: "config-data") pod "5e6e2c85-21fe-464c-9954-940e1c3b138b" (UID: "5e6e2c85-21fe-464c-9954-940e1c3b138b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.439797 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jbcp\" (UniqueName: \"kubernetes.io/projected/5e6e2c85-21fe-464c-9954-940e1c3b138b-kube-api-access-5jbcp\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.439850 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6e2c85-21fe-464c-9954-940e1c3b138b-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.439866 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6e2c85-21fe-464c-9954-940e1c3b138b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.439878 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6e2c85-21fe-464c-9954-940e1c3b138b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.685910 4625 generic.go:334] "Generic (PLEG): container finished" podID="5e6e2c85-21fe-464c-9954-940e1c3b138b" containerID="84d8172003fabd32d0dc0a2335d28542fa74f137fedb8de82ec7d6f2cfcf7043" exitCode=0 Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.686017 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e6e2c85-21fe-464c-9954-940e1c3b138b","Type":"ContainerDied","Data":"84d8172003fabd32d0dc0a2335d28542fa74f137fedb8de82ec7d6f2cfcf7043"} Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.686037 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.686107 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e6e2c85-21fe-464c-9954-940e1c3b138b","Type":"ContainerDied","Data":"b3312b2deb349b964877bce7acabe259a130d461ccd51d61de57a28bcbc2359b"} Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.686148 4625 scope.go:117] "RemoveContainer" containerID="84d8172003fabd32d0dc0a2335d28542fa74f137fedb8de82ec7d6f2cfcf7043" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.783027 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.819984 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.820672 4625 scope.go:117] "RemoveContainer" containerID="590d43d242b758ad934ef0ee13e290ae82626e94efb1b4ba8954fd0913e01771" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.848033 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 14:08:54 crc kubenswrapper[4625]: E1202 14:08:54.848730 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6e2c85-21fe-464c-9954-940e1c3b138b" containerName="nova-api-api" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.848763 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6e2c85-21fe-464c-9954-940e1c3b138b" containerName="nova-api-api" Dec 02 14:08:54 crc kubenswrapper[4625]: E1202 14:08:54.848781 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6e2c85-21fe-464c-9954-940e1c3b138b" containerName="nova-api-log" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.848788 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6e2c85-21fe-464c-9954-940e1c3b138b" containerName="nova-api-log" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.849061 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e6e2c85-21fe-464c-9954-940e1c3b138b" containerName="nova-api-api" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.849081 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e6e2c85-21fe-464c-9954-940e1c3b138b" containerName="nova-api-log" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.850820 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.855800 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.855962 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.856158 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.873687 4625 scope.go:117] "RemoveContainer" containerID="84d8172003fabd32d0dc0a2335d28542fa74f137fedb8de82ec7d6f2cfcf7043" Dec 02 14:08:54 crc kubenswrapper[4625]: E1202 14:08:54.881692 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84d8172003fabd32d0dc0a2335d28542fa74f137fedb8de82ec7d6f2cfcf7043\": container with ID starting with 84d8172003fabd32d0dc0a2335d28542fa74f137fedb8de82ec7d6f2cfcf7043 not found: ID does not exist" containerID="84d8172003fabd32d0dc0a2335d28542fa74f137fedb8de82ec7d6f2cfcf7043" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.881815 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d8172003fabd32d0dc0a2335d28542fa74f137fedb8de82ec7d6f2cfcf7043"} err="failed to get container status \"84d8172003fabd32d0dc0a2335d28542fa74f137fedb8de82ec7d6f2cfcf7043\": rpc error: code = NotFound desc = could not find container \"84d8172003fabd32d0dc0a2335d28542fa74f137fedb8de82ec7d6f2cfcf7043\": container with ID starting with 84d8172003fabd32d0dc0a2335d28542fa74f137fedb8de82ec7d6f2cfcf7043 not found: ID does not exist" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.881891 4625 scope.go:117] "RemoveContainer" containerID="590d43d242b758ad934ef0ee13e290ae82626e94efb1b4ba8954fd0913e01771" Dec 02 14:08:54 crc kubenswrapper[4625]: E1202 14:08:54.882585 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"590d43d242b758ad934ef0ee13e290ae82626e94efb1b4ba8954fd0913e01771\": container with ID starting with 590d43d242b758ad934ef0ee13e290ae82626e94efb1b4ba8954fd0913e01771 not found: ID does not exist" containerID="590d43d242b758ad934ef0ee13e290ae82626e94efb1b4ba8954fd0913e01771" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.882623 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"590d43d242b758ad934ef0ee13e290ae82626e94efb1b4ba8954fd0913e01771"} err="failed to get container status \"590d43d242b758ad934ef0ee13e290ae82626e94efb1b4ba8954fd0913e01771\": rpc error: code = NotFound desc = could not find container \"590d43d242b758ad934ef0ee13e290ae82626e94efb1b4ba8954fd0913e01771\": container with ID starting with 590d43d242b758ad934ef0ee13e290ae82626e94efb1b4ba8954fd0913e01771 not found: ID does not exist" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.900655 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e6e2c85-21fe-464c-9954-940e1c3b138b" path="/var/lib/kubelet/pods/5e6e2c85-21fe-464c-9954-940e1c3b138b/volumes" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.902666 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.954357 4625 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.954610 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-config-data\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.954685 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-public-tls-certs\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.954920 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ff455d7-a2b5-4222-84f2-bce730cd51de-logs\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.955081 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:54 crc kubenswrapper[4625]: I1202 14:08:54.955109 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz7n4\" (UniqueName: \"kubernetes.io/projected/3ff455d7-a2b5-4222-84f2-bce730cd51de-kube-api-access-nz7n4\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:55 crc kubenswrapper[4625]: I1202 14:08:55.057050 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ff455d7-a2b5-4222-84f2-bce730cd51de-logs\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:55 crc kubenswrapper[4625]: I1202 14:08:55.057187 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:55 crc kubenswrapper[4625]: I1202 14:08:55.057209 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz7n4\" (UniqueName: \"kubernetes.io/projected/3ff455d7-a2b5-4222-84f2-bce730cd51de-kube-api-access-nz7n4\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:55 crc kubenswrapper[4625]: I1202 14:08:55.057729 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ff455d7-a2b5-4222-84f2-bce730cd51de-logs\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:55 crc 
kubenswrapper[4625]: I1202 14:08:55.057827 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:55 crc kubenswrapper[4625]: I1202 14:08:55.057952 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-config-data\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:55 crc kubenswrapper[4625]: I1202 14:08:55.057978 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-public-tls-certs\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:55 crc kubenswrapper[4625]: I1202 14:08:55.065130 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-public-tls-certs\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:55 crc kubenswrapper[4625]: I1202 14:08:55.076383 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:55 crc kubenswrapper[4625]: I1202 14:08:55.077340 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-config-data\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:55 crc kubenswrapper[4625]: I1202 14:08:55.078353 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:55 crc kubenswrapper[4625]: I1202 14:08:55.085968 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz7n4\" (UniqueName: \"kubernetes.io/projected/3ff455d7-a2b5-4222-84f2-bce730cd51de-kube-api-access-nz7n4\") pod \"nova-api-0\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " pod="openstack/nova-api-0" Dec 02 14:08:55 crc kubenswrapper[4625]: I1202 14:08:55.192587 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:08:55 crc kubenswrapper[4625]: I1202 14:08:55.705251 4625 generic.go:334] "Generic (PLEG): container finished" podID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerID="a575e7f61940f08a2599201d4a6d3df5ef80a82150a304a5322c29eb80f59cd5" exitCode=0 Dec 02 14:08:55 crc kubenswrapper[4625]: I1202 14:08:55.705333 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcbed4fb-57cf-43b7-9916-d0d5d26dba16","Type":"ContainerDied","Data":"a575e7f61940f08a2599201d4a6d3df5ef80a82150a304a5322c29eb80f59cd5"} Dec 02 14:08:55 crc kubenswrapper[4625]: I1202 14:08:55.751563 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:08:55 crc kubenswrapper[4625]: W1202 14:08:55.809782 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ff455d7_a2b5_4222_84f2_bce730cd51de.slice/crio-ac7b61276dad3a2ce875b73a5734811209861d382ae41f7384abf8768e14d0e6 WatchSource:0}: Error finding container ac7b61276dad3a2ce875b73a5734811209861d382ae41f7384abf8768e14d0e6: Status 404 returned error can't find the container with id ac7b61276dad3a2ce875b73a5734811209861d382ae41f7384abf8768e14d0e6 Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.536794 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.611613 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-combined-ca-bundle\") pod \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.611788 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-ceilometer-tls-certs\") pod \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.611837 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-log-httpd\") pod \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.611963 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-run-httpd\") pod \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.612028 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-config-data\") pod \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.612060 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-sg-core-conf-yaml\") pod \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\" (UID: 
\"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.612124 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-scripts\") pod \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.612156 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v456\" (UniqueName: \"kubernetes.io/projected/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-kube-api-access-4v456\") pod \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.618293 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dcbed4fb-57cf-43b7-9916-d0d5d26dba16" (UID: "dcbed4fb-57cf-43b7-9916-d0d5d26dba16"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.618424 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dcbed4fb-57cf-43b7-9916-d0d5d26dba16" (UID: "dcbed4fb-57cf-43b7-9916-d0d5d26dba16"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.622920 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-scripts" (OuterVolumeSpecName: "scripts") pod "dcbed4fb-57cf-43b7-9916-d0d5d26dba16" (UID: "dcbed4fb-57cf-43b7-9916-d0d5d26dba16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.623583 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-kube-api-access-4v456" (OuterVolumeSpecName: "kube-api-access-4v456") pod "dcbed4fb-57cf-43b7-9916-d0d5d26dba16" (UID: "dcbed4fb-57cf-43b7-9916-d0d5d26dba16"). InnerVolumeSpecName "kube-api-access-4v456". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.715580 4625 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.715614 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.715625 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v456\" (UniqueName: \"kubernetes.io/projected/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-kube-api-access-4v456\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.715636 4625 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.723333 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ff455d7-a2b5-4222-84f2-bce730cd51de","Type":"ContainerStarted","Data":"11cfb46e123dbf84dd72ec436b8a7ac519a04c029df30a13ea22d6bf2426d7d9"} Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.723413 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ff455d7-a2b5-4222-84f2-bce730cd51de","Type":"ContainerStarted","Data":"ac7b61276dad3a2ce875b73a5734811209861d382ae41f7384abf8768e14d0e6"} Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.728137 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dcbed4fb-57cf-43b7-9916-d0d5d26dba16","Type":"ContainerDied","Data":"0f3308bc81475022096691753b76b0fcab0fda7477d43fa6bb9380da48ee001b"} Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.728187 4625 scope.go:117] "RemoveContainer" containerID="b8acbce62a479fb95ab3ccf6f65908407fa68ba4a9234721f65c672359a1aa8f" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.728825 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.823945 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dcbed4fb-57cf-43b7-9916-d0d5d26dba16" (UID: "dcbed4fb-57cf-43b7-9916-d0d5d26dba16"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.824517 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-sg-core-conf-yaml\") pod \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\" (UID: \"dcbed4fb-57cf-43b7-9916-d0d5d26dba16\") " Dec 02 14:08:56 crc kubenswrapper[4625]: W1202 14:08:56.824743 4625 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/dcbed4fb-57cf-43b7-9916-d0d5d26dba16/volumes/kubernetes.io~secret/sg-core-conf-yaml Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.824796 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dcbed4fb-57cf-43b7-9916-d0d5d26dba16" (UID: "dcbed4fb-57cf-43b7-9916-d0d5d26dba16"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.825206 4625 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.895484 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "dcbed4fb-57cf-43b7-9916-d0d5d26dba16" (UID: "dcbed4fb-57cf-43b7-9916-d0d5d26dba16"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.920409 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-config-data" (OuterVolumeSpecName: "config-data") pod "dcbed4fb-57cf-43b7-9916-d0d5d26dba16" (UID: "dcbed4fb-57cf-43b7-9916-d0d5d26dba16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.921483 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcbed4fb-57cf-43b7-9916-d0d5d26dba16" (UID: "dcbed4fb-57cf-43b7-9916-d0d5d26dba16"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.927185 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.927217 4625 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:56 crc kubenswrapper[4625]: I1202 14:08:56.927237 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbed4fb-57cf-43b7-9916-d0d5d26dba16-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.009432 4625 scope.go:117] "RemoveContainer" containerID="226205fc0b8c67e7dfa6a20516836a94602aa971fd5a7dddb0697ab0c1c028eb" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.041304 4625 scope.go:117] "RemoveContainer" containerID="a575e7f61940f08a2599201d4a6d3df5ef80a82150a304a5322c29eb80f59cd5" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.072265 4625 scope.go:117] "RemoveContainer" containerID="29d5866eaae8e13255ebe324687ac12a8b420885c897dd48b4bca6f5321cf172" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.073249 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.092576 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.226842 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:08:57 crc kubenswrapper[4625]: E1202 14:08:57.227475 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerName="ceilometer-notification-agent" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.227503 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerName="ceilometer-notification-agent" Dec 02 14:08:57 crc kubenswrapper[4625]: E1202 14:08:57.227528 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerName="sg-core" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.227534 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerName="sg-core" Dec 02 14:08:57 crc kubenswrapper[4625]: E1202 14:08:57.227559 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerName="ceilometer-central-agent" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.227565 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerName="ceilometer-central-agent" Dec 02 14:08:57 crc kubenswrapper[4625]: E1202 14:08:57.227586 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerName="proxy-httpd" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.227592 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerName="proxy-httpd" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.227782 4625 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerName="ceilometer-notification-agent" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.227802 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerName="proxy-httpd" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.227882 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerName="sg-core" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.227895 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" containerName="ceilometer-central-agent" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.241980 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.248924 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.249218 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.249410 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.275411 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.275642 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.387860 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd330e7-048e-4237-a165-25f8c3bf6bc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.387952 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bd330e7-048e-4237-a165-25f8c3bf6bc3-log-httpd\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.387985 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z87wc\" (UniqueName: \"kubernetes.io/projected/3bd330e7-048e-4237-a165-25f8c3bf6bc3-kube-api-access-z87wc\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.388021 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bd330e7-048e-4237-a165-25f8c3bf6bc3-scripts\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.388070 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd330e7-048e-4237-a165-25f8c3bf6bc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.388136 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bd330e7-048e-4237-a165-25f8c3bf6bc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.388227 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bd330e7-048e-4237-a165-25f8c3bf6bc3-run-httpd\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.388249 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd330e7-048e-4237-a165-25f8c3bf6bc3-config-data\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.391439 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-sfhvx"] Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.391772 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" podUID="56f84277-4e5d-4772-9da9-f9bd1aa4e637" containerName="dnsmasq-dns" containerID="cri-o://a77f6d0bc91c16d06ef12ff62b61af474b1bf7666d8c61b04b5717812d035268" gracePeriod=10 Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.490650 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd330e7-048e-4237-a165-25f8c3bf6bc3-config-data\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.490745 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd330e7-048e-4237-a165-25f8c3bf6bc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.490779 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bd330e7-048e-4237-a165-25f8c3bf6bc3-log-httpd\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.490810 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z87wc\" (UniqueName: \"kubernetes.io/projected/3bd330e7-048e-4237-a165-25f8c3bf6bc3-kube-api-access-z87wc\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.490845 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bd330e7-048e-4237-a165-25f8c3bf6bc3-scripts\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0" Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.490887 4625 
Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.490918 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bd330e7-048e-4237-a165-25f8c3bf6bc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0"
Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.490979 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bd330e7-048e-4237-a165-25f8c3bf6bc3-run-httpd\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0"
Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.491644 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bd330e7-048e-4237-a165-25f8c3bf6bc3-run-httpd\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0"
Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.494173 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3bd330e7-048e-4237-a165-25f8c3bf6bc3-log-httpd\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0"
Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.508199 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3bd330e7-048e-4237-a165-25f8c3bf6bc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0"
Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.509882 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bd330e7-048e-4237-a165-25f8c3bf6bc3-scripts\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0"
Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.512323 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd330e7-048e-4237-a165-25f8c3bf6bc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0"
Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.513044 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd330e7-048e-4237-a165-25f8c3bf6bc3-config-data\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0"
Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.521513 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z87wc\" (UniqueName: \"kubernetes.io/projected/3bd330e7-048e-4237-a165-25f8c3bf6bc3-kube-api-access-z87wc\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0"
Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.539898 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd330e7-048e-4237-a165-25f8c3bf6bc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3bd330e7-048e-4237-a165-25f8c3bf6bc3\") " pod="openstack/ceilometer-0"
Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.598194 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.859076 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ff455d7-a2b5-4222-84f2-bce730cd51de","Type":"ContainerStarted","Data":"634edc67df119bb77f46fa9bd88442ed750c75d8511c646f8aa1064701ba4eaf"}
Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.872928 4625 generic.go:334] "Generic (PLEG): container finished" podID="56f84277-4e5d-4772-9da9-f9bd1aa4e637" containerID="a77f6d0bc91c16d06ef12ff62b61af474b1bf7666d8c61b04b5717812d035268" exitCode=0
Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.872987 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" event={"ID":"56f84277-4e5d-4772-9da9-f9bd1aa4e637","Type":"ContainerDied","Data":"a77f6d0bc91c16d06ef12ff62b61af474b1bf7666d8c61b04b5717812d035268"}
Dec 02 14:08:57 crc kubenswrapper[4625]: I1202 14:08:57.935974 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.935901263 podStartE2EDuration="3.935901263s" podCreationTimestamp="2025-12-02 14:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:08:57.892009856 +0000 UTC m=+1493.854186931" watchObservedRunningTime="2025-12-02 14:08:57.935901263 +0000 UTC m=+1493.898078338"
Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.211532 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-sfhvx"
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.314481 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-dns-svc\") pod \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.314711 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-dns-swift-storage-0\") pod \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.314859 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-ovsdbserver-sb\") pod \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.314947 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-config\") pod \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.315003 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bvb8\" (UniqueName: \"kubernetes.io/projected/56f84277-4e5d-4772-9da9-f9bd1aa4e637-kube-api-access-5bvb8\") pod \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.315130 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-ovsdbserver-nb\") pod \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\" (UID: \"56f84277-4e5d-4772-9da9-f9bd1aa4e637\") " Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.329795 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f84277-4e5d-4772-9da9-f9bd1aa4e637-kube-api-access-5bvb8" (OuterVolumeSpecName: "kube-api-access-5bvb8") pod "56f84277-4e5d-4772-9da9-f9bd1aa4e637" (UID: "56f84277-4e5d-4772-9da9-f9bd1aa4e637"). InnerVolumeSpecName "kube-api-access-5bvb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.428708 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bvb8\" (UniqueName: \"kubernetes.io/projected/56f84277-4e5d-4772-9da9-f9bd1aa4e637-kube-api-access-5bvb8\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.433413 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56f84277-4e5d-4772-9da9-f9bd1aa4e637" (UID: "56f84277-4e5d-4772-9da9-f9bd1aa4e637"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.464514 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56f84277-4e5d-4772-9da9-f9bd1aa4e637" (UID: "56f84277-4e5d-4772-9da9-f9bd1aa4e637"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.467817 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.487956 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56f84277-4e5d-4772-9da9-f9bd1aa4e637" (UID: "56f84277-4e5d-4772-9da9-f9bd1aa4e637"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.497141 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-config" (OuterVolumeSpecName: "config") pod "56f84277-4e5d-4772-9da9-f9bd1aa4e637" (UID: "56f84277-4e5d-4772-9da9-f9bd1aa4e637"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:08:58 crc kubenswrapper[4625]: W1202 14:08:58.513505 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bd330e7_048e_4237_a165_25f8c3bf6bc3.slice/crio-c4fae1919ac27b9166bd4ee496a2bdefd8c51c7a346c08de81da019ec72cf450 WatchSource:0}: Error finding container c4fae1919ac27b9166bd4ee496a2bdefd8c51c7a346c08de81da019ec72cf450: Status 404 returned error can't find the container with id c4fae1919ac27b9166bd4ee496a2bdefd8c51c7a346c08de81da019ec72cf450 Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.530780 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56f84277-4e5d-4772-9da9-f9bd1aa4e637" (UID: "56f84277-4e5d-4772-9da9-f9bd1aa4e637"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.532895 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.533012 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.533088 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.533149 4625 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.533244 4625 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56f84277-4e5d-4772-9da9-f9bd1aa4e637-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.869124 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcbed4fb-57cf-43b7-9916-d0d5d26dba16" path="/var/lib/kubelet/pods/dcbed4fb-57cf-43b7-9916-d0d5d26dba16/volumes" Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.894630 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bd330e7-048e-4237-a165-25f8c3bf6bc3","Type":"ContainerStarted","Data":"c4fae1919ac27b9166bd4ee496a2bdefd8c51c7a346c08de81da019ec72cf450"} Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.897026 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" event={"ID":"56f84277-4e5d-4772-9da9-f9bd1aa4e637","Type":"ContainerDied","Data":"fb94520465d8661b61ab469ac30be281ef502942140291510c25515011d75744"} Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.897081 4625 scope.go:117] "RemoveContainer" containerID="a77f6d0bc91c16d06ef12ff62b61af474b1bf7666d8c61b04b5717812d035268" Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.897712 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-sfhvx" Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.898924 4625 generic.go:334] "Generic (PLEG): container finished" podID="6bc01764-a48e-4ca9-b717-f79df542b76e" containerID="083279c60aad88e82f0d585d1e3599eb65265da2e14cba5840fda19e99a18e9f" exitCode=0 Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.899275 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-s9zxp" event={"ID":"6bc01764-a48e-4ca9-b717-f79df542b76e","Type":"ContainerDied","Data":"083279c60aad88e82f0d585d1e3599eb65265da2e14cba5840fda19e99a18e9f"} Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.946587 4625 scope.go:117] "RemoveContainer" containerID="c40783744d8c9d77fa8a47285c0f9b9b3ba2f598f7deb415b74d37f1caaa28c4" Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.957544 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-sfhvx"] Dec 02 14:08:58 crc kubenswrapper[4625]: I1202 14:08:58.968468 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-sfhvx"] Dec 02 14:08:59 crc kubenswrapper[4625]: I1202 14:08:59.915134 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bd330e7-048e-4237-a165-25f8c3bf6bc3","Type":"ContainerStarted","Data":"f8e5eebbb8403aec917abc3a17eff37f83ce0e8a02168a111e3ecbd59d3b124a"} Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.393845 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-s9zxp" Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.490147 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6p8k\" (UniqueName: \"kubernetes.io/projected/6bc01764-a48e-4ca9-b717-f79df542b76e-kube-api-access-g6p8k\") pod \"6bc01764-a48e-4ca9-b717-f79df542b76e\" (UID: \"6bc01764-a48e-4ca9-b717-f79df542b76e\") " Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.490658 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-config-data\") pod \"6bc01764-a48e-4ca9-b717-f79df542b76e\" (UID: \"6bc01764-a48e-4ca9-b717-f79df542b76e\") " Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.490824 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-scripts\") pod \"6bc01764-a48e-4ca9-b717-f79df542b76e\" (UID: \"6bc01764-a48e-4ca9-b717-f79df542b76e\") " Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.490936 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-combined-ca-bundle\") pod \"6bc01764-a48e-4ca9-b717-f79df542b76e\" (UID: \"6bc01764-a48e-4ca9-b717-f79df542b76e\") " Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.513619 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc01764-a48e-4ca9-b717-f79df542b76e-kube-api-access-g6p8k" (OuterVolumeSpecName: "kube-api-access-g6p8k") pod "6bc01764-a48e-4ca9-b717-f79df542b76e" (UID: "6bc01764-a48e-4ca9-b717-f79df542b76e"). InnerVolumeSpecName "kube-api-access-g6p8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.522834 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-scripts" (OuterVolumeSpecName: "scripts") pod "6bc01764-a48e-4ca9-b717-f79df542b76e" (UID: "6bc01764-a48e-4ca9-b717-f79df542b76e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.543517 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-config-data" (OuterVolumeSpecName: "config-data") pod "6bc01764-a48e-4ca9-b717-f79df542b76e" (UID: "6bc01764-a48e-4ca9-b717-f79df542b76e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.561333 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bc01764-a48e-4ca9-b717-f79df542b76e" (UID: "6bc01764-a48e-4ca9-b717-f79df542b76e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.593527 4625 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-scripts\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.593568 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.593583 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6p8k\" (UniqueName: \"kubernetes.io/projected/6bc01764-a48e-4ca9-b717-f79df542b76e-kube-api-access-g6p8k\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.593594 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc01764-a48e-4ca9-b717-f79df542b76e-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.876849 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f84277-4e5d-4772-9da9-f9bd1aa4e637" path="/var/lib/kubelet/pods/56f84277-4e5d-4772-9da9-f9bd1aa4e637/volumes" Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.934849 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bd330e7-048e-4237-a165-25f8c3bf6bc3","Type":"ContainerStarted","Data":"d040d0a843a63f7afe91a277a4e84461098f61919f46ae59d79b008392d99960"} Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.939136 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-s9zxp" event={"ID":"6bc01764-a48e-4ca9-b717-f79df542b76e","Type":"ContainerDied","Data":"1dfafbf446ca993aae561fb06217dd15e27f8e219c9f79f5b0e395dbec4f9ec4"} Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.939252 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dfafbf446ca993aae561fb06217dd15e27f8e219c9f79f5b0e395dbec4f9ec4" Dec 02 14:09:00 crc kubenswrapper[4625]: I1202 14:09:00.939447 
4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-s9zxp" Dec 02 14:09:01 crc kubenswrapper[4625]: I1202 14:09:01.626529 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:09:01 crc kubenswrapper[4625]: I1202 14:09:01.627919 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3ff455d7-a2b5-4222-84f2-bce730cd51de" containerName="nova-api-log" containerID="cri-o://11cfb46e123dbf84dd72ec436b8a7ac519a04c029df30a13ea22d6bf2426d7d9" gracePeriod=30 Dec 02 14:09:01 crc kubenswrapper[4625]: I1202 14:09:01.629534 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3ff455d7-a2b5-4222-84f2-bce730cd51de" containerName="nova-api-api" containerID="cri-o://634edc67df119bb77f46fa9bd88442ed750c75d8511c646f8aa1064701ba4eaf" gracePeriod=30 Dec 02 14:09:01 crc kubenswrapper[4625]: I1202 14:09:01.700402 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:09:01 crc kubenswrapper[4625]: I1202 14:09:01.700797 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d1153514-f059-41ea-8eb2-e95aac32f061" containerName="nova-scheduler-scheduler" containerID="cri-o://4ca2a6dae7d079772c4ffa9ea78ff5f04c6be9ba26a3ec24968d8d5e95a60e57" gracePeriod=30 Dec 02 14:09:01 crc kubenswrapper[4625]: I1202 14:09:01.722419 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:09:01 crc kubenswrapper[4625]: I1202 14:09:01.723068 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" containerName="nova-metadata-log" containerID="cri-o://f54411480e7ac220faed5de9a4145a90f2432f322dd4913fa3087dadfd0c78c6" gracePeriod=30 Dec 02 14:09:01 crc kubenswrapper[4625]: I1202 14:09:01.723525 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" containerName="nova-metadata-metadata" containerID="cri-o://6de736b56f21b5c92bf40dd91eef71b9dcd47a539ab9b59f3478f94d5f49b9ad" gracePeriod=30 Dec 02 14:09:01 crc kubenswrapper[4625]: I1202 14:09:01.971780 4625 generic.go:334] "Generic (PLEG): container finished" podID="3ff455d7-a2b5-4222-84f2-bce730cd51de" containerID="11cfb46e123dbf84dd72ec436b8a7ac519a04c029df30a13ea22d6bf2426d7d9" exitCode=143 Dec 02 14:09:01 crc kubenswrapper[4625]: I1202 14:09:01.971941 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ff455d7-a2b5-4222-84f2-bce730cd51de","Type":"ContainerDied","Data":"11cfb46e123dbf84dd72ec436b8a7ac519a04c029df30a13ea22d6bf2426d7d9"} Dec 02 14:09:01 crc kubenswrapper[4625]: I1202 14:09:01.997159 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bd330e7-048e-4237-a165-25f8c3bf6bc3","Type":"ContainerStarted","Data":"98a017d38039b674013e6d1ae4bb457b700b7a7dfaaa187657e89faabf7a2002"} Dec 02 14:09:02 crc kubenswrapper[4625]: E1202 14:09:02.464249 4625 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ca2a6dae7d079772c4ffa9ea78ff5f04c6be9ba26a3ec24968d8d5e95a60e57" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 14:09:02 crc kubenswrapper[4625]: E1202 14:09:02.467647 4625 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ca2a6dae7d079772c4ffa9ea78ff5f04c6be9ba26a3ec24968d8d5e95a60e57" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 14:09:02 crc kubenswrapper[4625]: E1202 14:09:02.470044 4625 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ca2a6dae7d079772c4ffa9ea78ff5f04c6be9ba26a3ec24968d8d5e95a60e57" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 02 14:09:02 crc kubenswrapper[4625]: E1202 14:09:02.470093 4625 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d1153514-f059-41ea-8eb2-e95aac32f061" containerName="nova-scheduler-scheduler" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.014786 4625 generic.go:334] "Generic (PLEG): container finished" podID="cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" containerID="f54411480e7ac220faed5de9a4145a90f2432f322dd4913fa3087dadfd0c78c6" exitCode=143 Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.014842 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4","Type":"ContainerDied","Data":"f54411480e7ac220faed5de9a4145a90f2432f322dd4913fa3087dadfd0c78c6"} Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.742715 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.761058 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.879793 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-public-tls-certs\") pod \"3ff455d7-a2b5-4222-84f2-bce730cd51de\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.880415 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-config-data\") pod \"3ff455d7-a2b5-4222-84f2-bce730cd51de\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.880466 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j5sw\" (UniqueName: \"kubernetes.io/projected/d1153514-f059-41ea-8eb2-e95aac32f061-kube-api-access-9j5sw\") pod \"d1153514-f059-41ea-8eb2-e95aac32f061\" (UID: \"d1153514-f059-41ea-8eb2-e95aac32f061\") " Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.880490 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz7n4\" (UniqueName: \"kubernetes.io/projected/3ff455d7-a2b5-4222-84f2-bce730cd51de-kube-api-access-nz7n4\") pod \"3ff455d7-a2b5-4222-84f2-bce730cd51de\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.880582 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1153514-f059-41ea-8eb2-e95aac32f061-config-data\") pod \"d1153514-f059-41ea-8eb2-e95aac32f061\" (UID: \"d1153514-f059-41ea-8eb2-e95aac32f061\") " Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.880731 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ff455d7-a2b5-4222-84f2-bce730cd51de-logs\") pod \"3ff455d7-a2b5-4222-84f2-bce730cd51de\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.880784 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-combined-ca-bundle\") pod \"3ff455d7-a2b5-4222-84f2-bce730cd51de\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.880831 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-internal-tls-certs\") pod \"3ff455d7-a2b5-4222-84f2-bce730cd51de\" (UID: \"3ff455d7-a2b5-4222-84f2-bce730cd51de\") " Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.880899 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1153514-f059-41ea-8eb2-e95aac32f061-combined-ca-bundle\") pod \"d1153514-f059-41ea-8eb2-e95aac32f061\" (UID: \"d1153514-f059-41ea-8eb2-e95aac32f061\") " Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.881376 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ff455d7-a2b5-4222-84f2-bce730cd51de-logs" (OuterVolumeSpecName: "logs") pod "3ff455d7-a2b5-4222-84f2-bce730cd51de" (UID: 
"3ff455d7-a2b5-4222-84f2-bce730cd51de"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.881644 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ff455d7-a2b5-4222-84f2-bce730cd51de-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.893689 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ff455d7-a2b5-4222-84f2-bce730cd51de-kube-api-access-nz7n4" (OuterVolumeSpecName: "kube-api-access-nz7n4") pod "3ff455d7-a2b5-4222-84f2-bce730cd51de" (UID: "3ff455d7-a2b5-4222-84f2-bce730cd51de"). InnerVolumeSpecName "kube-api-access-nz7n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.894470 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1153514-f059-41ea-8eb2-e95aac32f061-kube-api-access-9j5sw" (OuterVolumeSpecName: "kube-api-access-9j5sw") pod "d1153514-f059-41ea-8eb2-e95aac32f061" (UID: "d1153514-f059-41ea-8eb2-e95aac32f061"). InnerVolumeSpecName "kube-api-access-9j5sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.918086 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1153514-f059-41ea-8eb2-e95aac32f061-config-data" (OuterVolumeSpecName: "config-data") pod "d1153514-f059-41ea-8eb2-e95aac32f061" (UID: "d1153514-f059-41ea-8eb2-e95aac32f061"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.918850 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1153514-f059-41ea-8eb2-e95aac32f061-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1153514-f059-41ea-8eb2-e95aac32f061" (UID: "d1153514-f059-41ea-8eb2-e95aac32f061"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.934149 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-config-data" (OuterVolumeSpecName: "config-data") pod "3ff455d7-a2b5-4222-84f2-bce730cd51de" (UID: "3ff455d7-a2b5-4222-84f2-bce730cd51de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.942538 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ff455d7-a2b5-4222-84f2-bce730cd51de" (UID: "3ff455d7-a2b5-4222-84f2-bce730cd51de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.949264 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3ff455d7-a2b5-4222-84f2-bce730cd51de" (UID: "3ff455d7-a2b5-4222-84f2-bce730cd51de"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.960081 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3ff455d7-a2b5-4222-84f2-bce730cd51de" (UID: "3ff455d7-a2b5-4222-84f2-bce730cd51de"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.983609 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.983946 4625 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.984019 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1153514-f059-41ea-8eb2-e95aac32f061-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.984089 4625 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.984148 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff455d7-a2b5-4222-84f2-bce730cd51de-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.984215 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j5sw\" (UniqueName: \"kubernetes.io/projected/d1153514-f059-41ea-8eb2-e95aac32f061-kube-api-access-9j5sw\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.984285 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz7n4\" (UniqueName: \"kubernetes.io/projected/3ff455d7-a2b5-4222-84f2-bce730cd51de-kube-api-access-nz7n4\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:03 crc kubenswrapper[4625]: I1202 14:09:03.984374 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1153514-f059-41ea-8eb2-e95aac32f061-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.032503 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3bd330e7-048e-4237-a165-25f8c3bf6bc3","Type":"ContainerStarted","Data":"4f6b239093902ab6b9adeb4a947bec95f4245397c935e936a7cc3d1e360f8a81"} Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.034270 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.035145 4625 generic.go:334] "Generic (PLEG): container finished" podID="d1153514-f059-41ea-8eb2-e95aac32f061" containerID="4ca2a6dae7d079772c4ffa9ea78ff5f04c6be9ba26a3ec24968d8d5e95a60e57" exitCode=0 Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.035235 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.035279 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d1153514-f059-41ea-8eb2-e95aac32f061","Type":"ContainerDied","Data":"4ca2a6dae7d079772c4ffa9ea78ff5f04c6be9ba26a3ec24968d8d5e95a60e57"} Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.035341 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d1153514-f059-41ea-8eb2-e95aac32f061","Type":"ContainerDied","Data":"f98f1e1d4bd01bb98494edd1797180bf9d2649839f3f4e81f6e41d0bbadea6c7"} Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.035365 4625 scope.go:117] "RemoveContainer" containerID="4ca2a6dae7d079772c4ffa9ea78ff5f04c6be9ba26a3ec24968d8d5e95a60e57" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.040279 4625 generic.go:334] "Generic (PLEG): container finished" podID="3ff455d7-a2b5-4222-84f2-bce730cd51de" containerID="634edc67df119bb77f46fa9bd88442ed750c75d8511c646f8aa1064701ba4eaf" exitCode=0 Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.040491 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ff455d7-a2b5-4222-84f2-bce730cd51de","Type":"ContainerDied","Data":"634edc67df119bb77f46fa9bd88442ed750c75d8511c646f8aa1064701ba4eaf"} Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.040540 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3ff455d7-a2b5-4222-84f2-bce730cd51de","Type":"ContainerDied","Data":"ac7b61276dad3a2ce875b73a5734811209861d382ae41f7384abf8768e14d0e6"} Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.040613 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.075185 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.51505665 podStartE2EDuration="7.075153675s" podCreationTimestamp="2025-12-02 14:08:57 +0000 UTC" firstStartedPulling="2025-12-02 14:08:58.519456497 +0000 UTC m=+1494.481633572" lastFinishedPulling="2025-12-02 14:09:03.079553522 +0000 UTC m=+1499.041730597" observedRunningTime="2025-12-02 14:09:04.060095515 +0000 UTC m=+1500.022272610" watchObservedRunningTime="2025-12-02 14:09:04.075153675 +0000 UTC m=+1500.037330750" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.084553 4625 scope.go:117] "RemoveContainer" containerID="4ca2a6dae7d079772c4ffa9ea78ff5f04c6be9ba26a3ec24968d8d5e95a60e57" Dec 02 14:09:04 crc kubenswrapper[4625]: E1202 14:09:04.085176 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ca2a6dae7d079772c4ffa9ea78ff5f04c6be9ba26a3ec24968d8d5e95a60e57\": container with ID starting with 4ca2a6dae7d079772c4ffa9ea78ff5f04c6be9ba26a3ec24968d8d5e95a60e57 not found: ID does not exist" containerID="4ca2a6dae7d079772c4ffa9ea78ff5f04c6be9ba26a3ec24968d8d5e95a60e57" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.085442 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca2a6dae7d079772c4ffa9ea78ff5f04c6be9ba26a3ec24968d8d5e95a60e57"} err="failed to get container status \"4ca2a6dae7d079772c4ffa9ea78ff5f04c6be9ba26a3ec24968d8d5e95a60e57\": rpc error: code = NotFound desc = could not find container \"4ca2a6dae7d079772c4ffa9ea78ff5f04c6be9ba26a3ec24968d8d5e95a60e57\": container with ID starting with 4ca2a6dae7d079772c4ffa9ea78ff5f04c6be9ba26a3ec24968d8d5e95a60e57 not found: ID does not exist" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.085625 4625 scope.go:117] "RemoveContainer" containerID="634edc67df119bb77f46fa9bd88442ed750c75d8511c646f8aa1064701ba4eaf" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.132442 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.156529 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.156469 4625 scope.go:117] "RemoveContainer" containerID="11cfb46e123dbf84dd72ec436b8a7ac519a04c029df30a13ea22d6bf2426d7d9" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.183990 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:09:04 crc kubenswrapper[4625]: E1202 14:09:04.184669 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f84277-4e5d-4772-9da9-f9bd1aa4e637" containerName="dnsmasq-dns" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.184690 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f84277-4e5d-4772-9da9-f9bd1aa4e637" containerName="dnsmasq-dns" Dec 02 14:09:04 crc kubenswrapper[4625]: E1202 14:09:04.184700 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f84277-4e5d-4772-9da9-f9bd1aa4e637" containerName="init" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.184707 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f84277-4e5d-4772-9da9-f9bd1aa4e637" containerName="init" Dec 02 14:09:04 crc kubenswrapper[4625]: E1202 14:09:04.184736 4625 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff455d7-a2b5-4222-84f2-bce730cd51de" containerName="nova-api-log" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.184743 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff455d7-a2b5-4222-84f2-bce730cd51de" containerName="nova-api-log" Dec 02 14:09:04 crc kubenswrapper[4625]: E1202 14:09:04.184757 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1153514-f059-41ea-8eb2-e95aac32f061" containerName="nova-scheduler-scheduler" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.184763 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1153514-f059-41ea-8eb2-e95aac32f061" containerName="nova-scheduler-scheduler" Dec 02 14:09:04 crc kubenswrapper[4625]: E1202 14:09:04.184769 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc01764-a48e-4ca9-b717-f79df542b76e" containerName="nova-manage" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.184775 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc01764-a48e-4ca9-b717-f79df542b76e" containerName="nova-manage" Dec 02 14:09:04 crc kubenswrapper[4625]: E1202 14:09:04.184793 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff455d7-a2b5-4222-84f2-bce730cd51de" containerName="nova-api-api" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.184799 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff455d7-a2b5-4222-84f2-bce730cd51de" containerName="nova-api-api" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.185005 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1153514-f059-41ea-8eb2-e95aac32f061" containerName="nova-scheduler-scheduler" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.185027 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f84277-4e5d-4772-9da9-f9bd1aa4e637" containerName="dnsmasq-dns" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.185036 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ff455d7-a2b5-4222-84f2-bce730cd51de" containerName="nova-api-api" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.185050 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ff455d7-a2b5-4222-84f2-bce730cd51de" containerName="nova-api-log" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.185063 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bc01764-a48e-4ca9-b717-f79df542b76e" containerName="nova-manage" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.185864 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.198621 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.215786 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.260541 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.269092 4625 scope.go:117] "RemoveContainer" containerID="634edc67df119bb77f46fa9bd88442ed750c75d8511c646f8aa1064701ba4eaf" Dec 02 14:09:04 crc kubenswrapper[4625]: E1202 14:09:04.271219 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"634edc67df119bb77f46fa9bd88442ed750c75d8511c646f8aa1064701ba4eaf\": container with ID starting with 634edc67df119bb77f46fa9bd88442ed750c75d8511c646f8aa1064701ba4eaf not found: ID does not exist" containerID="634edc67df119bb77f46fa9bd88442ed750c75d8511c646f8aa1064701ba4eaf" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.271280 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"634edc67df119bb77f46fa9bd88442ed750c75d8511c646f8aa1064701ba4eaf"} err="failed to get container status \"634edc67df119bb77f46fa9bd88442ed750c75d8511c646f8aa1064701ba4eaf\": rpc error: code = NotFound desc = could not find container \"634edc67df119bb77f46fa9bd88442ed750c75d8511c646f8aa1064701ba4eaf\": container with ID starting with 634edc67df119bb77f46fa9bd88442ed750c75d8511c646f8aa1064701ba4eaf not found: ID does not exist" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.271852 4625 scope.go:117] "RemoveContainer" containerID="11cfb46e123dbf84dd72ec436b8a7ac519a04c029df30a13ea22d6bf2426d7d9" Dec 02 14:09:04 crc kubenswrapper[4625]: E1202 14:09:04.272670 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11cfb46e123dbf84dd72ec436b8a7ac519a04c029df30a13ea22d6bf2426d7d9\": container with ID starting with 11cfb46e123dbf84dd72ec436b8a7ac519a04c029df30a13ea22d6bf2426d7d9 not found: ID does not exist" containerID="11cfb46e123dbf84dd72ec436b8a7ac519a04c029df30a13ea22d6bf2426d7d9" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.272704 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11cfb46e123dbf84dd72ec436b8a7ac519a04c029df30a13ea22d6bf2426d7d9"} err="failed to get container status \"11cfb46e123dbf84dd72ec436b8a7ac519a04c029df30a13ea22d6bf2426d7d9\": rpc error: code = NotFound desc = could not find container \"11cfb46e123dbf84dd72ec436b8a7ac519a04c029df30a13ea22d6bf2426d7d9\": container with ID starting with 11cfb46e123dbf84dd72ec436b8a7ac519a04c029df30a13ea22d6bf2426d7d9 not found: ID does not exist" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.281597 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.292763 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a\") " pod="openstack/nova-scheduler-0" Dec 02 14:09:04 
crc kubenswrapper[4625]: I1202 14:09:04.293060 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a-config-data\") pod \"nova-scheduler-0\" (UID: \"6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a\") " pod="openstack/nova-scheduler-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.293786 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpr4b\" (UniqueName: \"kubernetes.io/projected/6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a-kube-api-access-hpr4b\") pod \"nova-scheduler-0\" (UID: \"6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a\") " pod="openstack/nova-scheduler-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.300911 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.304550 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.308637 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.308975 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.309397 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.326428 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.397062 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1800930c-5ef6-4a3e-8a80-df933d636e5b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.397156 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs5mw\" (UniqueName: \"kubernetes.io/projected/1800930c-5ef6-4a3e-8a80-df933d636e5b-kube-api-access-qs5mw\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.397214 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1800930c-5ef6-4a3e-8a80-df933d636e5b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.397295 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a\") " pod="openstack/nova-scheduler-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.397351 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a-config-data\") pod \"nova-scheduler-0\" (UID: \"6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a\") " 
pod="openstack/nova-scheduler-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.397435 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1800930c-5ef6-4a3e-8a80-df933d636e5b-public-tls-certs\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.397506 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1800930c-5ef6-4a3e-8a80-df933d636e5b-config-data\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.397559 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1800930c-5ef6-4a3e-8a80-df933d636e5b-logs\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.397741 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpr4b\" (UniqueName: \"kubernetes.io/projected/6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a-kube-api-access-hpr4b\") pod \"nova-scheduler-0\" (UID: \"6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a\") " pod="openstack/nova-scheduler-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.403114 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a\") " pod="openstack/nova-scheduler-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.419083 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a-config-data\") pod \"nova-scheduler-0\" (UID: \"6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a\") " pod="openstack/nova-scheduler-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.423071 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpr4b\" (UniqueName: \"kubernetes.io/projected/6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a-kube-api-access-hpr4b\") pod \"nova-scheduler-0\" (UID: \"6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a\") " pod="openstack/nova-scheduler-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.500507 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1800930c-5ef6-4a3e-8a80-df933d636e5b-public-tls-certs\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.500603 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1800930c-5ef6-4a3e-8a80-df933d636e5b-config-data\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.500642 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1800930c-5ef6-4a3e-8a80-df933d636e5b-logs\") pod \"nova-api-0\" (UID: 
\"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.500732 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1800930c-5ef6-4a3e-8a80-df933d636e5b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.500768 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs5mw\" (UniqueName: \"kubernetes.io/projected/1800930c-5ef6-4a3e-8a80-df933d636e5b-kube-api-access-qs5mw\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.500794 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1800930c-5ef6-4a3e-8a80-df933d636e5b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.502215 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1800930c-5ef6-4a3e-8a80-df933d636e5b-logs\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.507373 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1800930c-5ef6-4a3e-8a80-df933d636e5b-config-data\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.508553 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1800930c-5ef6-4a3e-8a80-df933d636e5b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.509946 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1800930c-5ef6-4a3e-8a80-df933d636e5b-public-tls-certs\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.510466 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1800930c-5ef6-4a3e-8a80-df933d636e5b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.528173 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs5mw\" (UniqueName: \"kubernetes.io/projected/1800930c-5ef6-4a3e-8a80-df933d636e5b-kube-api-access-qs5mw\") pod \"nova-api-0\" (UID: \"1800930c-5ef6-4a3e-8a80-df933d636e5b\") " pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.543159 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.642552 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.890282 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ff455d7-a2b5-4222-84f2-bce730cd51de" path="/var/lib/kubelet/pods/3ff455d7-a2b5-4222-84f2-bce730cd51de/volumes" Dec 02 14:09:04 crc kubenswrapper[4625]: I1202 14:09:04.890981 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1153514-f059-41ea-8eb2-e95aac32f061" path="/var/lib/kubelet/pods/d1153514-f059-41ea-8eb2-e95aac32f061/volumes" Dec 02 14:09:05 crc kubenswrapper[4625]: I1202 14:09:05.111686 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 02 14:09:05 crc kubenswrapper[4625]: I1202 14:09:05.194712 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 02 14:09:05 crc kubenswrapper[4625]: I1202 14:09:05.594875 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:48420->10.217.0.195:8775: read: connection reset by peer" Dec 02 14:09:05 crc kubenswrapper[4625]: I1202 14:09:05.595349 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:48430->10.217.0.195:8775: read: connection reset by peer" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.030753 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.103053 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a","Type":"ContainerStarted","Data":"e0e4afeb1303451eb789b35f50e62e0ebec18e365eff1cf7e98a1aa8e786df27"} Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.103658 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a","Type":"ContainerStarted","Data":"c41dcfbf2eb0101322a790ce340dcec83b6627a6331724f6c81ef7f81bebb1d9"} Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.119034 4625 generic.go:334] "Generic (PLEG): container finished" podID="cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" containerID="6de736b56f21b5c92bf40dd91eef71b9dcd47a539ab9b59f3478f94d5f49b9ad" exitCode=0 Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.119140 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.119196 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4","Type":"ContainerDied","Data":"6de736b56f21b5c92bf40dd91eef71b9dcd47a539ab9b59f3478f94d5f49b9ad"} Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.122551 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4","Type":"ContainerDied","Data":"04a54c46b4dc243456da4b3c85bb807ec3893d52872e2a2b6f3ff1069ec7adb1"} Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.122643 4625 scope.go:117] "RemoveContainer" containerID="6de736b56f21b5c92bf40dd91eef71b9dcd47a539ab9b59f3478f94d5f49b9ad" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.120648 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-config-data\") pod \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.123014 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-logs\") pod \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.123137 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-combined-ca-bundle\") pod \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.123369 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-nova-metadata-tls-certs\") pod \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.123478 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9fcr\" (UniqueName: \"kubernetes.io/projected/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-kube-api-access-j9fcr\") pod \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\" (UID: \"cf8e46e3-4654-4ce2-8d92-c75ac32c67f4\") " Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.135821 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-logs" (OuterVolumeSpecName: "logs") pod "cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" (UID: "cf8e46e3-4654-4ce2-8d92-c75ac32c67f4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.164925 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1800930c-5ef6-4a3e-8a80-df933d636e5b","Type":"ContainerStarted","Data":"df7e686d026285c1d619918866d768921f6d12a26ea848fa0729d4a9ee427c95"} Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.164999 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1800930c-5ef6-4a3e-8a80-df933d636e5b","Type":"ContainerStarted","Data":"22c7b7dccab27752113894375d2c6f95ee88ebbdbe0ca3f55431278ddd106d2b"} Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.165013 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1800930c-5ef6-4a3e-8a80-df933d636e5b","Type":"ContainerStarted","Data":"70a847e54b75795646dc636de7d9818da3e13f531715a4500f7279e2dd0693d8"} Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.186990 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-kube-api-access-j9fcr" (OuterVolumeSpecName: "kube-api-access-j9fcr") pod "cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" (UID: "cf8e46e3-4654-4ce2-8d92-c75ac32c67f4"). InnerVolumeSpecName "kube-api-access-j9fcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.206481 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-config-data" (OuterVolumeSpecName: "config-data") pod "cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" (UID: "cf8e46e3-4654-4ce2-8d92-c75ac32c67f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.225593 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.225559888 podStartE2EDuration="2.225559888s" podCreationTimestamp="2025-12-02 14:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:09:06.145946545 +0000 UTC m=+1502.108123620" watchObservedRunningTime="2025-12-02 14:09:06.225559888 +0000 UTC m=+1502.187736963" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.228259 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.228239068 podStartE2EDuration="2.228239068s" podCreationTimestamp="2025-12-02 14:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:09:06.206206927 +0000 UTC m=+1502.168384022" watchObservedRunningTime="2025-12-02 14:09:06.228239068 +0000 UTC m=+1502.190416143" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.250940 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.251084 4625 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-logs\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.251102 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9fcr\" (UniqueName: \"kubernetes.io/projected/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-kube-api-access-j9fcr\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.282328 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" (UID: "cf8e46e3-4654-4ce2-8d92-c75ac32c67f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.299570 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" (UID: "cf8e46e3-4654-4ce2-8d92-c75ac32c67f4"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.353516 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.353556 4625 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.452169 4625 scope.go:117] "RemoveContainer" containerID="f54411480e7ac220faed5de9a4145a90f2432f322dd4913fa3087dadfd0c78c6" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.507818 4625 scope.go:117] "RemoveContainer" containerID="6de736b56f21b5c92bf40dd91eef71b9dcd47a539ab9b59f3478f94d5f49b9ad" Dec 02 14:09:06 crc kubenswrapper[4625]: E1202 14:09:06.523327 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de736b56f21b5c92bf40dd91eef71b9dcd47a539ab9b59f3478f94d5f49b9ad\": container with ID starting with 6de736b56f21b5c92bf40dd91eef71b9dcd47a539ab9b59f3478f94d5f49b9ad not found: ID does not exist" containerID="6de736b56f21b5c92bf40dd91eef71b9dcd47a539ab9b59f3478f94d5f49b9ad" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.523383 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de736b56f21b5c92bf40dd91eef71b9dcd47a539ab9b59f3478f94d5f49b9ad"} err="failed to get container status \"6de736b56f21b5c92bf40dd91eef71b9dcd47a539ab9b59f3478f94d5f49b9ad\": rpc error: code = NotFound desc = could not find container \"6de736b56f21b5c92bf40dd91eef71b9dcd47a539ab9b59f3478f94d5f49b9ad\": container with ID starting with 6de736b56f21b5c92bf40dd91eef71b9dcd47a539ab9b59f3478f94d5f49b9ad not found: ID does not exist" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.523420 4625 scope.go:117] "RemoveContainer" containerID="f54411480e7ac220faed5de9a4145a90f2432f322dd4913fa3087dadfd0c78c6" Dec 02 14:09:06 crc kubenswrapper[4625]: E1202 14:09:06.531044 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f54411480e7ac220faed5de9a4145a90f2432f322dd4913fa3087dadfd0c78c6\": container with ID starting with f54411480e7ac220faed5de9a4145a90f2432f322dd4913fa3087dadfd0c78c6 not found: ID does not exist" containerID="f54411480e7ac220faed5de9a4145a90f2432f322dd4913fa3087dadfd0c78c6" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.531109 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f54411480e7ac220faed5de9a4145a90f2432f322dd4913fa3087dadfd0c78c6"} err="failed to get container status \"f54411480e7ac220faed5de9a4145a90f2432f322dd4913fa3087dadfd0c78c6\": rpc error: code = NotFound desc = could not find container \"f54411480e7ac220faed5de9a4145a90f2432f322dd4913fa3087dadfd0c78c6\": container with ID starting with f54411480e7ac220faed5de9a4145a90f2432f322dd4913fa3087dadfd0c78c6 not found: ID does not exist" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.563598 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.594223 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-metadata-0"] Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.606233 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:09:06 crc kubenswrapper[4625]: E1202 14:09:06.606964 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" containerName="nova-metadata-log" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.606983 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" containerName="nova-metadata-log" Dec 02 14:09:06 crc kubenswrapper[4625]: E1202 14:09:06.607008 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" containerName="nova-metadata-metadata" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.607015 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" containerName="nova-metadata-metadata" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.607227 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" containerName="nova-metadata-metadata" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.607239 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" containerName="nova-metadata-log" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.608391 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.618044 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.618130 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.633790 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.768906 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0994edfd-9799-4974-8f7e-eb4cf312a370-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0994edfd-9799-4974-8f7e-eb4cf312a370\") " pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.769369 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0994edfd-9799-4974-8f7e-eb4cf312a370-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0994edfd-9799-4974-8f7e-eb4cf312a370\") " pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.769415 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq9gv\" (UniqueName: \"kubernetes.io/projected/0994edfd-9799-4974-8f7e-eb4cf312a370-kube-api-access-rq9gv\") pod \"nova-metadata-0\" (UID: \"0994edfd-9799-4974-8f7e-eb4cf312a370\") " pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.769503 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0994edfd-9799-4974-8f7e-eb4cf312a370-config-data\") pod \"nova-metadata-0\" (UID: 
\"0994edfd-9799-4974-8f7e-eb4cf312a370\") " pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.769550 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0994edfd-9799-4974-8f7e-eb4cf312a370-logs\") pod \"nova-metadata-0\" (UID: \"0994edfd-9799-4974-8f7e-eb4cf312a370\") " pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.871162 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0994edfd-9799-4974-8f7e-eb4cf312a370-config-data\") pod \"nova-metadata-0\" (UID: \"0994edfd-9799-4974-8f7e-eb4cf312a370\") " pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.871233 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0994edfd-9799-4974-8f7e-eb4cf312a370-logs\") pod \"nova-metadata-0\" (UID: \"0994edfd-9799-4974-8f7e-eb4cf312a370\") " pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.871272 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0994edfd-9799-4974-8f7e-eb4cf312a370-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0994edfd-9799-4974-8f7e-eb4cf312a370\") " pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.871350 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0994edfd-9799-4974-8f7e-eb4cf312a370-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0994edfd-9799-4974-8f7e-eb4cf312a370\") " pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.871386 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq9gv\" (UniqueName: \"kubernetes.io/projected/0994edfd-9799-4974-8f7e-eb4cf312a370-kube-api-access-rq9gv\") pod \"nova-metadata-0\" (UID: \"0994edfd-9799-4974-8f7e-eb4cf312a370\") " pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.917712 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf8e46e3-4654-4ce2-8d92-c75ac32c67f4" path="/var/lib/kubelet/pods/cf8e46e3-4654-4ce2-8d92-c75ac32c67f4/volumes" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.948657 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0994edfd-9799-4974-8f7e-eb4cf312a370-logs\") pod \"nova-metadata-0\" (UID: \"0994edfd-9799-4974-8f7e-eb4cf312a370\") " pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.949805 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq9gv\" (UniqueName: \"kubernetes.io/projected/0994edfd-9799-4974-8f7e-eb4cf312a370-kube-api-access-rq9gv\") pod \"nova-metadata-0\" (UID: \"0994edfd-9799-4974-8f7e-eb4cf312a370\") " pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.950732 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0994edfd-9799-4974-8f7e-eb4cf312a370-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0994edfd-9799-4974-8f7e-eb4cf312a370\") " 
pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.954093 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0994edfd-9799-4974-8f7e-eb4cf312a370-config-data\") pod \"nova-metadata-0\" (UID: \"0994edfd-9799-4974-8f7e-eb4cf312a370\") " pod="openstack/nova-metadata-0" Dec 02 14:09:06 crc kubenswrapper[4625]: I1202 14:09:06.973061 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0994edfd-9799-4974-8f7e-eb4cf312a370-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0994edfd-9799-4974-8f7e-eb4cf312a370\") " pod="openstack/nova-metadata-0" Dec 02 14:09:07 crc kubenswrapper[4625]: I1202 14:09:07.235119 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 02 14:09:07 crc kubenswrapper[4625]: I1202 14:09:07.777863 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 02 14:09:08 crc kubenswrapper[4625]: I1202 14:09:08.195673 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0994edfd-9799-4974-8f7e-eb4cf312a370","Type":"ContainerStarted","Data":"8550d88d4c400b4cdab7415d58fec23935458bdb25711fb4d1672dc3404dfe0e"} Dec 02 14:09:08 crc kubenswrapper[4625]: I1202 14:09:08.196078 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0994edfd-9799-4974-8f7e-eb4cf312a370","Type":"ContainerStarted","Data":"52b4586016beee5d705749dbd53e7d810637f2dff9a32d73a7dc85eedfcd5a56"} Dec 02 14:09:09 crc kubenswrapper[4625]: I1202 14:09:09.211762 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0994edfd-9799-4974-8f7e-eb4cf312a370","Type":"ContainerStarted","Data":"eaf4d2b35d5f86d20cb78c4b4a80421da504253de2eb0a1adf46b924a54cf92e"} Dec 02 14:09:09 crc kubenswrapper[4625]: I1202 14:09:09.255123 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.255089874 podStartE2EDuration="3.255089874s" podCreationTimestamp="2025-12-02 14:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:09:09.244852279 +0000 UTC m=+1505.207029354" watchObservedRunningTime="2025-12-02 14:09:09.255089874 +0000 UTC m=+1505.217266959" Dec 02 14:09:09 crc kubenswrapper[4625]: I1202 14:09:09.543616 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 02 14:09:12 crc kubenswrapper[4625]: I1202 14:09:12.236776 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 14:09:12 crc kubenswrapper[4625]: I1202 14:09:12.237383 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 02 14:09:14 crc kubenswrapper[4625]: I1202 14:09:14.544394 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 02 14:09:14 crc kubenswrapper[4625]: I1202 14:09:14.576535 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 02 14:09:14 crc kubenswrapper[4625]: I1202 14:09:14.645177 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 14:09:14 crc 
kubenswrapper[4625]: I1202 14:09:14.645249 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 02 14:09:15 crc kubenswrapper[4625]: I1202 14:09:15.492684 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 02 14:09:15 crc kubenswrapper[4625]: I1202 14:09:15.657574 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1800930c-5ef6-4a3e-8a80-df933d636e5b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:09:15 crc kubenswrapper[4625]: I1202 14:09:15.657657 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1800930c-5ef6-4a3e-8a80-df933d636e5b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 14:09:17 crc kubenswrapper[4625]: I1202 14:09:17.236645 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 14:09:17 crc kubenswrapper[4625]: I1202 14:09:17.236716 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 02 14:09:18 crc kubenswrapper[4625]: I1202 14:09:18.252660 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0994edfd-9799-4974-8f7e-eb4cf312a370" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 14:09:18 crc kubenswrapper[4625]: I1202 14:09:18.252681 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0994edfd-9799-4974-8f7e-eb4cf312a370" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 02 14:09:24 crc kubenswrapper[4625]: I1202 14:09:24.657010 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 14:09:24 crc kubenswrapper[4625]: I1202 14:09:24.658005 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 02 14:09:24 crc kubenswrapper[4625]: I1202 14:09:24.658634 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 14:09:24 crc kubenswrapper[4625]: I1202 14:09:24.658705 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 02 14:09:24 crc kubenswrapper[4625]: I1202 14:09:24.667324 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 14:09:24 crc kubenswrapper[4625]: I1202 14:09:24.669744 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 02 14:09:27 crc kubenswrapper[4625]: I1202 14:09:27.243359 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 14:09:27 crc kubenswrapper[4625]: I1202 14:09:27.244299 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 02 14:09:27 crc kubenswrapper[4625]: I1202 14:09:27.251204 4625 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 14:09:27 crc kubenswrapper[4625]: I1202 14:09:27.607411 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 02 14:09:27 crc kubenswrapper[4625]: I1202 14:09:27.615519 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 02 14:09:39 crc kubenswrapper[4625]: I1202 14:09:39.601793 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 14:09:41 crc kubenswrapper[4625]: I1202 14:09:41.008042 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 14:09:45 crc kubenswrapper[4625]: I1202 14:09:45.400908 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" containerName="rabbitmq" containerID="cri-o://391c3655b26c148b6f2b79ad817679e5e3bccbb9beaed46211e837d28f4c8907" gracePeriod=604795 Dec 02 14:09:45 crc kubenswrapper[4625]: I1202 14:09:45.670571 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Dec 02 14:09:46 crc kubenswrapper[4625]: I1202 14:09:46.899249 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="5a251393-cf48-4d79-8e8d-b46d5e3c664b" containerName="rabbitmq" containerID="cri-o://47c943a6cbaa463a5ed3297531df1fac01775ca05bc4922c59f86d9b19daf748" gracePeriod=604795 Dec 02 14:09:51 crc kubenswrapper[4625]: I1202 14:09:51.822889 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-j5jqf"] Dec 02 14:09:51 crc kubenswrapper[4625]: I1202 14:09:51.828061 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:51 crc kubenswrapper[4625]: I1202 14:09:51.836281 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 02 14:09:51 crc kubenswrapper[4625]: I1202 14:09:51.930463 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:51 crc kubenswrapper[4625]: I1202 14:09:51.930579 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k2ft\" (UniqueName: \"kubernetes.io/projected/fdebccf3-800b-4f0d-8058-eb9608d05a2e-kube-api-access-4k2ft\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:51 crc kubenswrapper[4625]: I1202 14:09:51.930633 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:51 crc kubenswrapper[4625]: I1202 14:09:51.930685 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:51 crc kubenswrapper[4625]: I1202 14:09:51.930731 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:51 crc kubenswrapper[4625]: I1202 14:09:51.930810 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-config\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:51 crc kubenswrapper[4625]: I1202 14:09:51.930837 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:51 crc kubenswrapper[4625]: I1202 14:09:51.981414 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-j5jqf"] Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.037211 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.037295 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k2ft\" (UniqueName: \"kubernetes.io/projected/fdebccf3-800b-4f0d-8058-eb9608d05a2e-kube-api-access-4k2ft\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.037350 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.037382 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.037429 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.037488 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-config\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.037511 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.038601 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.038620 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.039274 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.040188 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.046357 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-config\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.057089 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.088908 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k2ft\" (UniqueName: \"kubernetes.io/projected/fdebccf3-800b-4f0d-8058-eb9608d05a2e-kube-api-access-4k2ft\") pod \"dnsmasq-dns-79bd4cc8c9-j5jqf\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.171288 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.190399 4625 generic.go:334] "Generic (PLEG): container finished" podID="1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" containerID="391c3655b26c148b6f2b79ad817679e5e3bccbb9beaed46211e837d28f4c8907" exitCode=0 Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.190462 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5","Type":"ContainerDied","Data":"391c3655b26c148b6f2b79ad817679e5e3bccbb9beaed46211e837d28f4c8907"} Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.314621 4625 util.go:48] "No ready sandbox for pod can be found. 
Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.449452 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-erlang-cookie\") pod \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") "
Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.449542 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-erlang-cookie-secret\") pod \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") "
Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.449587 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-plugins-conf\") pod \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") "
Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.449620 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-confd\") pod \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") "
Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.449684 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtqvm\" (UniqueName: \"kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-kube-api-access-xtqvm\") pod \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") "
Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.449727 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-pod-info\") pod \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") "
Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.449755 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-server-conf\") pod \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") "
Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.449785 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") "
Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.449828 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-tls\") pod \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") "
Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.449851 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-plugins\") pod \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\" (UID:
\"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.449880 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-config-data\") pod \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\" (UID: \"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5\") " Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.452451 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" (UID: "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.453037 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" (UID: "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.455494 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" (UID: "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.473282 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-pod-info" (OuterVolumeSpecName: "pod-info") pod "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" (UID: "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.473662 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-kube-api-access-xtqvm" (OuterVolumeSpecName: "kube-api-access-xtqvm") pod "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" (UID: "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5"). InnerVolumeSpecName "kube-api-access-xtqvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.473769 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" (UID: "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.500215 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" (UID: "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.503250 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" (UID: "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.547931 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-config-data" (OuterVolumeSpecName: "config-data") pod "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" (UID: "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.555964 4625 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.556289 4625 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.556419 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.556535 4625 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.556659 4625 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.556786 4625 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.556896 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtqvm\" (UniqueName: \"kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-kube-api-access-xtqvm\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.557053 4625 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-pod-info\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.565583 4625 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.597737 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-server-conf" 
(OuterVolumeSpecName: "server-conf") pod "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" (UID: "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.690554 4625 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.707389 4625 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-server-conf\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.707456 4625 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.850473 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" (UID: "1ab3c28f-42ae-43ae-a6d7-10460f3da4c5"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.919908 4625 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:52 crc kubenswrapper[4625]: I1202 14:09:52.999923 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-j5jqf"] Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.235878 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" event={"ID":"fdebccf3-800b-4f0d-8058-eb9608d05a2e","Type":"ContainerStarted","Data":"2158b025de2cb0266f09824048064584a6233049f894a21a07a301fff57e1cec"} Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.239407 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1ab3c28f-42ae-43ae-a6d7-10460f3da4c5","Type":"ContainerDied","Data":"2dc4f5997d765ffc327d36b5e0cc62e8bc9f6a45d2c7c9709e688a5705ccc57e"} Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.239446 4625 scope.go:117] "RemoveContainer" containerID="391c3655b26c148b6f2b79ad817679e5e3bccbb9beaed46211e837d28f4c8907" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.239613 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.255881 4625 generic.go:334] "Generic (PLEG): container finished" podID="5a251393-cf48-4d79-8e8d-b46d5e3c664b" containerID="47c943a6cbaa463a5ed3297531df1fac01775ca05bc4922c59f86d9b19daf748" exitCode=0
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.255929 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a251393-cf48-4d79-8e8d-b46d5e3c664b","Type":"ContainerDied","Data":"47c943a6cbaa463a5ed3297531df1fac01775ca05bc4922c59f86d9b19daf748"}
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.295533 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.314395 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.321620 4625 scope.go:117] "RemoveContainer" containerID="2b13cc239360571cb4d7c4f23f1286d1c18c0922b089c9d5b841f4446361788a"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.403963 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 02 14:09:53 crc kubenswrapper[4625]: E1202 14:09:53.404484 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" containerName="setup-container"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.404498 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" containerName="setup-container"
Dec 02 14:09:53 crc kubenswrapper[4625]: E1202 14:09:53.404520 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" containerName="rabbitmq"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.404526 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" containerName="rabbitmq"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.404765 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" containerName="rabbitmq"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.405916 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
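This is the characteristic in-place replacement of a StatefulSet pod: SyncLoop DELETE and REMOVE retire openstack/rabbitmq-server-0 under UID 1ab3c28f-42ae-43ae-a6d7-10460f3da4c5, SyncLoop ADD immediately recreates the same name (the entries that follow show the new UID 5eb1d307-4690-436e-8f82-a27eff014c84), and cpu_manager/memory_manager discard the stale per-container state. A small Go sketch that makes such UID churn visible in an export is below; the file name and regex are assumptions based on the visible format, not a kubelet API.

```go
// pod_uids.go: a sketch that collects every pod-name -> pod-UID pairing seen
// in "SyncLoop (PLEG)" event lines and flags names observed with more than
// one UID, i.e. pods that were deleted and recreated under the same name.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches: pod="openstack/rabbitmq-server-0" event={"ID":"1ab3c28f-..."
	rePLEG := regexp.MustCompile(`pod="([^"]+)" event=\{"ID":"([^"]+)"`)

	f, err := os.Open("kubelet.log") // assumed journal export, as in the sketch above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	uids := map[string][]string{} // pod name -> UIDs in order of first sighting
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		m := rePLEG.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		name, uid := m[1], m[2]
		known := false
		for _, u := range uids[name] {
			if u == uid {
				known = true
			}
		}
		if !known {
			uids[name] = append(uids[name], uid)
		}
	}
	for name, list := range uids {
		if len(list) > 1 {
			fmt.Println(name, "recreated in place; UIDs:", list)
		}
	}
}
```

Run against this excerpt it would report both rabbitmq-server-0 (1ab3c28f-... then 5eb1d307-...) and, a second later, rabbitmq-cell1-server-0 (5a251393-... then 50ba9ca8-...).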
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.414993 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-h7kl5"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.415421 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.415485 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.424538 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.424862 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.425013 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.425208 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.437024 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.543557 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5eb1d307-4690-436e-8f82-a27eff014c84-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.543620 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5eb1d307-4690-436e-8f82-a27eff014c84-config-data\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.543649 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5eb1d307-4690-436e-8f82-a27eff014c84-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.543705 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5eb1d307-4690-436e-8f82-a27eff014c84-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.543726 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5eb1d307-4690-436e-8f82-a27eff014c84-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0"
Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.543742 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName:
\"kubernetes.io/downward-api/5eb1d307-4690-436e-8f82-a27eff014c84-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.543761 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfmwq\" (UniqueName: \"kubernetes.io/projected/5eb1d307-4690-436e-8f82-a27eff014c84-kube-api-access-mfmwq\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.543790 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5eb1d307-4690-436e-8f82-a27eff014c84-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.543867 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.543917 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5eb1d307-4690-436e-8f82-a27eff014c84-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.543969 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5eb1d307-4690-436e-8f82-a27eff014c84-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.648483 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5eb1d307-4690-436e-8f82-a27eff014c84-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.649281 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5eb1d307-4690-436e-8f82-a27eff014c84-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.649442 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5eb1d307-4690-436e-8f82-a27eff014c84-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.649818 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5eb1d307-4690-436e-8f82-a27eff014c84-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.649922 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5eb1d307-4690-436e-8f82-a27eff014c84-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.650024 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5eb1d307-4690-436e-8f82-a27eff014c84-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.650116 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5eb1d307-4690-436e-8f82-a27eff014c84-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.650202 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5eb1d307-4690-436e-8f82-a27eff014c84-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.650303 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfmwq\" (UniqueName: \"kubernetes.io/projected/5eb1d307-4690-436e-8f82-a27eff014c84-kube-api-access-mfmwq\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.650418 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5eb1d307-4690-436e-8f82-a27eff014c84-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.650621 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.649762 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5eb1d307-4690-436e-8f82-a27eff014c84-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.650747 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5eb1d307-4690-436e-8f82-a27eff014c84-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.651569 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/5eb1d307-4690-436e-8f82-a27eff014c84-config-data\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.651871 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5eb1d307-4690-436e-8f82-a27eff014c84-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.652062 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5eb1d307-4690-436e-8f82-a27eff014c84-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.654146 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.660359 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5eb1d307-4690-436e-8f82-a27eff014c84-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.661251 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5eb1d307-4690-436e-8f82-a27eff014c84-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.669821 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5eb1d307-4690-436e-8f82-a27eff014c84-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.687961 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5eb1d307-4690-436e-8f82-a27eff014c84-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.702783 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfmwq\" (UniqueName: \"kubernetes.io/projected/5eb1d307-4690-436e-8f82-a27eff014c84-kube-api-access-mfmwq\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.722593 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"5eb1d307-4690-436e-8f82-a27eff014c84\") " pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.754655 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 02 14:09:53 crc kubenswrapper[4625]: I1202 14:09:53.963734 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:54 crc kubenswrapper[4625]: I1202 14:09:54.070808 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-confd\") pod \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " Dec 02 14:09:54 crc kubenswrapper[4625]: I1202 14:09:54.070949 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a251393-cf48-4d79-8e8d-b46d5e3c664b-erlang-cookie-secret\") pod \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " Dec 02 14:09:54 crc kubenswrapper[4625]: I1202 14:09:54.070978 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-plugins\") pod \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " Dec 02 14:09:54 crc kubenswrapper[4625]: I1202 14:09:54.071003 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rbm2\" (UniqueName: \"kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-kube-api-access-6rbm2\") pod \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " Dec 02 14:09:54 crc kubenswrapper[4625]: I1202 14:09:54.071155 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " Dec 02 14:09:54 crc kubenswrapper[4625]: I1202 14:09:54.071187 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-tls\") pod \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " Dec 02 14:09:54 crc kubenswrapper[4625]: I1202 14:09:54.071273 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-erlang-cookie\") pod \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " Dec 02 14:09:54 crc kubenswrapper[4625]: I1202 14:09:54.071372 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-config-data\") pod \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " Dec 02 14:09:54 crc kubenswrapper[4625]: I1202 14:09:54.071419 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a251393-cf48-4d79-8e8d-b46d5e3c664b-pod-info\") pod \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " Dec 02 14:09:54 crc kubenswrapper[4625]: I1202 14:09:54.071473 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-server-conf\") pod \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " Dec 02 14:09:54 crc kubenswrapper[4625]: I1202 14:09:54.071507 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-plugins-conf\") pod \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\" (UID: \"5a251393-cf48-4d79-8e8d-b46d5e3c664b\") " Dec 02 14:09:54 crc kubenswrapper[4625]: I1202 14:09:54.072562 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5a251393-cf48-4d79-8e8d-b46d5e3c664b" (UID: "5a251393-cf48-4d79-8e8d-b46d5e3c664b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:09:54 crc kubenswrapper[4625]: I1202 14:09:54.072639 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5a251393-cf48-4d79-8e8d-b46d5e3c664b" (UID: "5a251393-cf48-4d79-8e8d-b46d5e3c664b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:09:54 crc kubenswrapper[4625]: I1202 14:09:54.073791 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5a251393-cf48-4d79-8e8d-b46d5e3c664b" (UID: "5a251393-cf48-4d79-8e8d-b46d5e3c664b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.110391 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5a251393-cf48-4d79-8e8d-b46d5e3c664b-pod-info" (OuterVolumeSpecName: "pod-info") pod "5a251393-cf48-4d79-8e8d-b46d5e3c664b" (UID: "5a251393-cf48-4d79-8e8d-b46d5e3c664b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.110483 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "5a251393-cf48-4d79-8e8d-b46d5e3c664b" (UID: "5a251393-cf48-4d79-8e8d-b46d5e3c664b"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.117454 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-kube-api-access-6rbm2" (OuterVolumeSpecName: "kube-api-access-6rbm2") pod "5a251393-cf48-4d79-8e8d-b46d5e3c664b" (UID: "5a251393-cf48-4d79-8e8d-b46d5e3c664b"). InnerVolumeSpecName "kube-api-access-6rbm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.117823 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a251393-cf48-4d79-8e8d-b46d5e3c664b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5a251393-cf48-4d79-8e8d-b46d5e3c664b" (UID: "5a251393-cf48-4d79-8e8d-b46d5e3c664b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.135857 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5a251393-cf48-4d79-8e8d-b46d5e3c664b" (UID: "5a251393-cf48-4d79-8e8d-b46d5e3c664b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.189846 4625 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.190229 4625 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.190246 4625 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.190257 4625 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5a251393-cf48-4d79-8e8d-b46d5e3c664b-pod-info\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.190648 4625 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.190665 4625 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5a251393-cf48-4d79-8e8d-b46d5e3c664b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.190674 4625 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.190684 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rbm2\" (UniqueName: \"kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-kube-api-access-6rbm2\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.192147 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-config-data" (OuterVolumeSpecName: "config-data") pod "5a251393-cf48-4d79-8e8d-b46d5e3c664b" (UID: "5a251393-cf48-4d79-8e8d-b46d5e3c664b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.253674 4625 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.274828 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-server-conf" (OuterVolumeSpecName: "server-conf") pod "5a251393-cf48-4d79-8e8d-b46d5e3c664b" (UID: "5a251393-cf48-4d79-8e8d-b46d5e3c664b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.281816 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5a251393-cf48-4d79-8e8d-b46d5e3c664b","Type":"ContainerDied","Data":"a5c460a5b9b90ec61ba841009c1da3d99109a74de57744516da94ba3ee283665"} Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.281901 4625 scope.go:117] "RemoveContainer" containerID="47c943a6cbaa463a5ed3297531df1fac01775ca05bc4922c59f86d9b19daf748" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.282111 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.289549 4625 generic.go:334] "Generic (PLEG): container finished" podID="fdebccf3-800b-4f0d-8058-eb9608d05a2e" containerID="a41ab8cb7c2ffd0c4a687a6224d5ed95612d0f8a21ba32f3a5fedad5376a6a04" exitCode=0 Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.289611 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" event={"ID":"fdebccf3-800b-4f0d-8058-eb9608d05a2e","Type":"ContainerDied","Data":"a41ab8cb7c2ffd0c4a687a6224d5ed95612d0f8a21ba32f3a5fedad5376a6a04"} Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.290971 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5a251393-cf48-4d79-8e8d-b46d5e3c664b" (UID: "5a251393-cf48-4d79-8e8d-b46d5e3c664b"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.301171 4625 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-server-conf\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.302297 4625 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5a251393-cf48-4d79-8e8d-b46d5e3c664b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.302464 4625 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.302512 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a251393-cf48-4d79-8e8d-b46d5e3c664b-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.337404 4625 scope.go:117] "RemoveContainer" containerID="fab8eea7cfc9538032913f923cb15e255e6fdc6b7685be897462dd50245e0a2c" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.427355 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.670283 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.785128 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.841475 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 14:09:55 crc kubenswrapper[4625]: E1202 14:09:54.845987 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a251393-cf48-4d79-8e8d-b46d5e3c664b" containerName="setup-container" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.846010 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a251393-cf48-4d79-8e8d-b46d5e3c664b" containerName="setup-container" Dec 02 14:09:55 crc kubenswrapper[4625]: E1202 14:09:54.846091 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a251393-cf48-4d79-8e8d-b46d5e3c664b" containerName="rabbitmq" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.846101 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a251393-cf48-4d79-8e8d-b46d5e3c664b" containerName="rabbitmq" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.846987 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a251393-cf48-4d79-8e8d-b46d5e3c664b" containerName="rabbitmq" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.884745 4625 util.go:30] "No sandbox for pod can be found. 
Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.887339 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.888061 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.888533 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-td5w5"
Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.888821 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.891723 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.904794 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.905029 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.925582 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ab3c28f-42ae-43ae-a6d7-10460f3da4c5" path="/var/lib/kubelet/pods/1ab3c28f-42ae-43ae-a6d7-10460f3da4c5/volumes"
Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.926584 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a251393-cf48-4d79-8e8d-b46d5e3c664b" path="/var/lib/kubelet/pods/5a251393-cf48-4d79-8e8d-b46d5e3c664b/volumes"
Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:54.929046 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.056156 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50ba9ca8-e722-4c48-9435-a358d35a893e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.056234 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50ba9ca8-e722-4c48-9435-a358d35a893e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.056277 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50ba9ca8-e722-4c48-9435-a358d35a893e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.056304 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 02 14:09:55 crc kubenswrapper[4625]: I1202
14:09:55.056341 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50ba9ca8-e722-4c48-9435-a358d35a893e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.056382 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50ba9ca8-e722-4c48-9435-a358d35a893e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.056426 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50ba9ca8-e722-4c48-9435-a358d35a893e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.056455 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50ba9ca8-e722-4c48-9435-a358d35a893e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.056474 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50ba9ca8-e722-4c48-9435-a358d35a893e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.056500 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrxz9\" (UniqueName: \"kubernetes.io/projected/50ba9ca8-e722-4c48-9435-a358d35a893e-kube-api-access-jrxz9\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.056582 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50ba9ca8-e722-4c48-9435-a358d35a893e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.158143 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50ba9ca8-e722-4c48-9435-a358d35a893e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.158257 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50ba9ca8-e722-4c48-9435-a358d35a893e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc 
kubenswrapper[4625]: I1202 14:09:55.158292 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50ba9ca8-e722-4c48-9435-a358d35a893e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.158454 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50ba9ca8-e722-4c48-9435-a358d35a893e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.158488 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrxz9\" (UniqueName: \"kubernetes.io/projected/50ba9ca8-e722-4c48-9435-a358d35a893e-kube-api-access-jrxz9\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.158764 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50ba9ca8-e722-4c48-9435-a358d35a893e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.159652 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50ba9ca8-e722-4c48-9435-a358d35a893e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.159722 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50ba9ca8-e722-4c48-9435-a358d35a893e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.159745 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50ba9ca8-e722-4c48-9435-a358d35a893e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.159778 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50ba9ca8-e722-4c48-9435-a358d35a893e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.159801 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.159819 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/50ba9ca8-e722-4c48-9435-a358d35a893e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.160096 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50ba9ca8-e722-4c48-9435-a358d35a893e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.160591 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50ba9ca8-e722-4c48-9435-a358d35a893e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.161122 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50ba9ca8-e722-4c48-9435-a358d35a893e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.161165 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.161740 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50ba9ca8-e722-4c48-9435-a358d35a893e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.170879 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50ba9ca8-e722-4c48-9435-a358d35a893e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.180368 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50ba9ca8-e722-4c48-9435-a358d35a893e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.181223 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50ba9ca8-e722-4c48-9435-a358d35a893e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.203273 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrxz9\" (UniqueName: \"kubernetes.io/projected/50ba9ca8-e722-4c48-9435-a358d35a893e-kube-api-access-jrxz9\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.205502 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50ba9ca8-e722-4c48-9435-a358d35a893e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.236885 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50ba9ca8-e722-4c48-9435-a358d35a893e\") " pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.306194 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5eb1d307-4690-436e-8f82-a27eff014c84","Type":"ContainerStarted","Data":"088fa5fb504e59f64a0fe4e2bb164d56a60d5d838dde6769d77ac4bb7faff6af"} Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.312458 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" event={"ID":"fdebccf3-800b-4f0d-8058-eb9608d05a2e","Type":"ContainerStarted","Data":"acdbb2c0f1e52e4882ccdc4b4a4b5fbd2a4a4d758a4cf2681f66f20ea8948a52"} Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.312866 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.348590 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" podStartSLOduration=4.348561975 podStartE2EDuration="4.348561975s" podCreationTimestamp="2025-12-02 14:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:09:55.33758 +0000 UTC m=+1551.299757095" watchObservedRunningTime="2025-12-02 14:09:55.348561975 +0000 UTC m=+1551.310739050" Dec 02 14:09:55 crc kubenswrapper[4625]: I1202 14:09:55.540375 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:09:56 crc kubenswrapper[4625]: I1202 14:09:56.248688 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 02 14:09:56 crc kubenswrapper[4625]: W1202 14:09:56.263655 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50ba9ca8_e722_4c48_9435_a358d35a893e.slice/crio-9699c9617681225793b45aaf4c9fe23cab392289b0654ea557ed5abb5d5a9a71 WatchSource:0}: Error finding container 9699c9617681225793b45aaf4c9fe23cab392289b0654ea557ed5abb5d5a9a71: Status 404 returned error can't find the container with id 9699c9617681225793b45aaf4c9fe23cab392289b0654ea557ed5abb5d5a9a71 Dec 02 14:09:56 crc kubenswrapper[4625]: I1202 14:09:56.363764 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50ba9ca8-e722-4c48-9435-a358d35a893e","Type":"ContainerStarted","Data":"9699c9617681225793b45aaf4c9fe23cab392289b0654ea557ed5abb5d5a9a71"} Dec 02 14:09:57 crc kubenswrapper[4625]: I1202 14:09:57.378826 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5eb1d307-4690-436e-8f82-a27eff014c84","Type":"ContainerStarted","Data":"2b3067ff9dda4c6ab551f8c410076bb694956cbfb196184213b76583c012def0"} Dec 02 14:09:58 crc kubenswrapper[4625]: I1202 14:09:58.391264 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50ba9ca8-e722-4c48-9435-a358d35a893e","Type":"ContainerStarted","Data":"d3d00ddb215839075fcf2a7caf835732d5be34e02372839c1708f5f007f317ca"} Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.173643 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.262791 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-z6sn6"] Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.263163 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" podUID="610d1bad-c2ce-4533-914b-ed46676dc3b8" containerName="dnsmasq-dns" containerID="cri-o://cef341be6da69701ca3ee73d9fe643944a0d76363345a50d8cd1cb03ab585698" gracePeriod=10 Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.454256 4625 generic.go:334] "Generic (PLEG): container finished" podID="610d1bad-c2ce-4533-914b-ed46676dc3b8" containerID="cef341be6da69701ca3ee73d9fe643944a0d76363345a50d8cd1cb03ab585698" exitCode=0 Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.454358 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" event={"ID":"610d1bad-c2ce-4533-914b-ed46676dc3b8","Type":"ContainerDied","Data":"cef341be6da69701ca3ee73d9fe643944a0d76363345a50d8cd1cb03ab585698"} Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.548482 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54ffdb7d8c-skjm6"] Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.554274 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.585847 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54ffdb7d8c-skjm6"] Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.658490 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-dns-svc\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.658606 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-openstack-edpm-ipam\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.658752 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-ovsdbserver-nb\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.658779 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-dns-swift-storage-0\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.658826 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2pl5\" (UniqueName: \"kubernetes.io/projected/5908f5de-9af5-4cde-abf8-5959a6c8648e-kube-api-access-j2pl5\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.658858 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-ovsdbserver-sb\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.658945 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-config\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.762092 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-config\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.762655 4625 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-dns-svc\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.762730 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-openstack-edpm-ipam\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.762845 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-ovsdbserver-nb\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.762874 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-dns-swift-storage-0\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.762901 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2pl5\" (UniqueName: \"kubernetes.io/projected/5908f5de-9af5-4cde-abf8-5959a6c8648e-kube-api-access-j2pl5\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.762925 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-ovsdbserver-sb\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.764325 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-dns-svc\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.764324 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-openstack-edpm-ipam\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.764323 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-config\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.764470 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-ovsdbserver-nb\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.764650 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-ovsdbserver-sb\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.764908 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5908f5de-9af5-4cde-abf8-5959a6c8648e-dns-swift-storage-0\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.788958 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2pl5\" (UniqueName: \"kubernetes.io/projected/5908f5de-9af5-4cde-abf8-5959a6c8648e-kube-api-access-j2pl5\") pod \"dnsmasq-dns-54ffdb7d8c-skjm6\" (UID: \"5908f5de-9af5-4cde-abf8-5959a6c8648e\") " pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.881904 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:02 crc kubenswrapper[4625]: I1202 14:10:02.950753 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.091988 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-dns-swift-storage-0\") pod \"610d1bad-c2ce-4533-914b-ed46676dc3b8\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.092504 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-dns-svc\") pod \"610d1bad-c2ce-4533-914b-ed46676dc3b8\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.092564 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-ovsdbserver-nb\") pod \"610d1bad-c2ce-4533-914b-ed46676dc3b8\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.092589 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwg88\" (UniqueName: \"kubernetes.io/projected/610d1bad-c2ce-4533-914b-ed46676dc3b8-kube-api-access-kwg88\") pod \"610d1bad-c2ce-4533-914b-ed46676dc3b8\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.092622 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-ovsdbserver-sb\") pod \"610d1bad-c2ce-4533-914b-ed46676dc3b8\" (UID: 
\"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.092904 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-config\") pod \"610d1bad-c2ce-4533-914b-ed46676dc3b8\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.136516 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/610d1bad-c2ce-4533-914b-ed46676dc3b8-kube-api-access-kwg88" (OuterVolumeSpecName: "kube-api-access-kwg88") pod "610d1bad-c2ce-4533-914b-ed46676dc3b8" (UID: "610d1bad-c2ce-4533-914b-ed46676dc3b8"). InnerVolumeSpecName "kube-api-access-kwg88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.196793 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "610d1bad-c2ce-4533-914b-ed46676dc3b8" (UID: "610d1bad-c2ce-4533-914b-ed46676dc3b8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.197791 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-dns-swift-storage-0\") pod \"610d1bad-c2ce-4533-914b-ed46676dc3b8\" (UID: \"610d1bad-c2ce-4533-914b-ed46676dc3b8\") " Dec 02 14:10:03 crc kubenswrapper[4625]: W1202 14:10:03.197996 4625 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/610d1bad-c2ce-4533-914b-ed46676dc3b8/volumes/kubernetes.io~configmap/dns-swift-storage-0 Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.198014 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "610d1bad-c2ce-4533-914b-ed46676dc3b8" (UID: "610d1bad-c2ce-4533-914b-ed46676dc3b8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.198405 4625 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.198432 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwg88\" (UniqueName: \"kubernetes.io/projected/610d1bad-c2ce-4533-914b-ed46676dc3b8-kube-api-access-kwg88\") on node \"crc\" DevicePath \"\"" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.323651 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-config" (OuterVolumeSpecName: "config") pod "610d1bad-c2ce-4533-914b-ed46676dc3b8" (UID: "610d1bad-c2ce-4533-914b-ed46676dc3b8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.382427 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "610d1bad-c2ce-4533-914b-ed46676dc3b8" (UID: "610d1bad-c2ce-4533-914b-ed46676dc3b8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.382649 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "610d1bad-c2ce-4533-914b-ed46676dc3b8" (UID: "610d1bad-c2ce-4533-914b-ed46676dc3b8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.385118 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "610d1bad-c2ce-4533-914b-ed46676dc3b8" (UID: "610d1bad-c2ce-4533-914b-ed46676dc3b8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.402941 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.402984 4625 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.402996 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.403007 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/610d1bad-c2ce-4533-914b-ed46676dc3b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.515039 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" event={"ID":"610d1bad-c2ce-4533-914b-ed46676dc3b8","Type":"ContainerDied","Data":"0cc788643c8487a468c5927caf77a8dd20c6e4fdaa93ac30efe951923ce79f54"} Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.515111 4625 scope.go:117] "RemoveContainer" containerID="cef341be6da69701ca3ee73d9fe643944a0d76363345a50d8cd1cb03ab585698" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.515263 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-z6sn6" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.599566 4625 scope.go:117] "RemoveContainer" containerID="d82c85f40dba2abfc1dec54bab1495ee85ee111e69c8195f5ba945009bf7ef75" Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.608696 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-z6sn6"] Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.626738 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-z6sn6"] Dec 02 14:10:03 crc kubenswrapper[4625]: I1202 14:10:03.807236 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54ffdb7d8c-skjm6"] Dec 02 14:10:04 crc kubenswrapper[4625]: I1202 14:10:04.529508 4625 generic.go:334] "Generic (PLEG): container finished" podID="5908f5de-9af5-4cde-abf8-5959a6c8648e" containerID="0c5946738700e6bb6b34010988941b02689f0d104c1912e61beecacaf91ab4df" exitCode=0 Dec 02 14:10:04 crc kubenswrapper[4625]: I1202 14:10:04.529726 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" event={"ID":"5908f5de-9af5-4cde-abf8-5959a6c8648e","Type":"ContainerDied","Data":"0c5946738700e6bb6b34010988941b02689f0d104c1912e61beecacaf91ab4df"} Dec 02 14:10:04 crc kubenswrapper[4625]: I1202 14:10:04.530083 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" event={"ID":"5908f5de-9af5-4cde-abf8-5959a6c8648e","Type":"ContainerStarted","Data":"192f67af841b10d597c059512ccddcca711ec2866668362d442a2ba304c2abdd"} Dec 02 14:10:04 crc kubenswrapper[4625]: I1202 14:10:04.870063 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="610d1bad-c2ce-4533-914b-ed46676dc3b8" path="/var/lib/kubelet/pods/610d1bad-c2ce-4533-914b-ed46676dc3b8/volumes" Dec 02 14:10:05 crc kubenswrapper[4625]: I1202 14:10:05.543079 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" event={"ID":"5908f5de-9af5-4cde-abf8-5959a6c8648e","Type":"ContainerStarted","Data":"915a8730e017f5110c22a325898310b56721f20ac4f92d2670523f0853b5b1f0"} Dec 02 14:10:05 crc kubenswrapper[4625]: I1202 14:10:05.543248 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:05 crc kubenswrapper[4625]: I1202 14:10:05.572510 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" podStartSLOduration=3.572480461 podStartE2EDuration="3.572480461s" podCreationTimestamp="2025-12-02 14:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:10:05.56546197 +0000 UTC m=+1561.527639055" watchObservedRunningTime="2025-12-02 14:10:05.572480461 +0000 UTC m=+1561.534657536" Dec 02 14:10:08 crc kubenswrapper[4625]: I1202 14:10:08.846143 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-62h2h"] Dec 02 14:10:08 crc kubenswrapper[4625]: E1202 14:10:08.851150 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="610d1bad-c2ce-4533-914b-ed46676dc3b8" containerName="init" Dec 02 14:10:08 crc kubenswrapper[4625]: I1202 14:10:08.851197 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="610d1bad-c2ce-4533-914b-ed46676dc3b8" containerName="init" Dec 02 14:10:08 crc kubenswrapper[4625]: E1202 14:10:08.851210 4625 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="610d1bad-c2ce-4533-914b-ed46676dc3b8" containerName="dnsmasq-dns" Dec 02 14:10:08 crc kubenswrapper[4625]: I1202 14:10:08.851217 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="610d1bad-c2ce-4533-914b-ed46676dc3b8" containerName="dnsmasq-dns" Dec 02 14:10:08 crc kubenswrapper[4625]: I1202 14:10:08.851603 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="610d1bad-c2ce-4533-914b-ed46676dc3b8" containerName="dnsmasq-dns" Dec 02 14:10:08 crc kubenswrapper[4625]: I1202 14:10:08.853350 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:08 crc kubenswrapper[4625]: I1202 14:10:08.874374 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-62h2h"] Dec 02 14:10:09 crc kubenswrapper[4625]: I1202 14:10:09.039696 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd825a5-89cf-42bb-987a-553a69f4839f-catalog-content\") pod \"redhat-marketplace-62h2h\" (UID: \"0cd825a5-89cf-42bb-987a-553a69f4839f\") " pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:09 crc kubenswrapper[4625]: I1202 14:10:09.040277 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42kbb\" (UniqueName: \"kubernetes.io/projected/0cd825a5-89cf-42bb-987a-553a69f4839f-kube-api-access-42kbb\") pod \"redhat-marketplace-62h2h\" (UID: \"0cd825a5-89cf-42bb-987a-553a69f4839f\") " pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:09 crc kubenswrapper[4625]: I1202 14:10:09.040378 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd825a5-89cf-42bb-987a-553a69f4839f-utilities\") pod \"redhat-marketplace-62h2h\" (UID: \"0cd825a5-89cf-42bb-987a-553a69f4839f\") " pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:09 crc kubenswrapper[4625]: I1202 14:10:09.141672 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd825a5-89cf-42bb-987a-553a69f4839f-catalog-content\") pod \"redhat-marketplace-62h2h\" (UID: \"0cd825a5-89cf-42bb-987a-553a69f4839f\") " pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:09 crc kubenswrapper[4625]: I1202 14:10:09.141740 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42kbb\" (UniqueName: \"kubernetes.io/projected/0cd825a5-89cf-42bb-987a-553a69f4839f-kube-api-access-42kbb\") pod \"redhat-marketplace-62h2h\" (UID: \"0cd825a5-89cf-42bb-987a-553a69f4839f\") " pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:09 crc kubenswrapper[4625]: I1202 14:10:09.141830 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd825a5-89cf-42bb-987a-553a69f4839f-utilities\") pod \"redhat-marketplace-62h2h\" (UID: \"0cd825a5-89cf-42bb-987a-553a69f4839f\") " pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:09 crc kubenswrapper[4625]: I1202 14:10:09.142478 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd825a5-89cf-42bb-987a-553a69f4839f-utilities\") pod 
\"redhat-marketplace-62h2h\" (UID: \"0cd825a5-89cf-42bb-987a-553a69f4839f\") " pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:09 crc kubenswrapper[4625]: I1202 14:10:09.142559 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd825a5-89cf-42bb-987a-553a69f4839f-catalog-content\") pod \"redhat-marketplace-62h2h\" (UID: \"0cd825a5-89cf-42bb-987a-553a69f4839f\") " pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:09 crc kubenswrapper[4625]: I1202 14:10:09.176763 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42kbb\" (UniqueName: \"kubernetes.io/projected/0cd825a5-89cf-42bb-987a-553a69f4839f-kube-api-access-42kbb\") pod \"redhat-marketplace-62h2h\" (UID: \"0cd825a5-89cf-42bb-987a-553a69f4839f\") " pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:09 crc kubenswrapper[4625]: I1202 14:10:09.184977 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:09 crc kubenswrapper[4625]: I1202 14:10:09.789245 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-62h2h"] Dec 02 14:10:10 crc kubenswrapper[4625]: I1202 14:10:10.617503 4625 generic.go:334] "Generic (PLEG): container finished" podID="0cd825a5-89cf-42bb-987a-553a69f4839f" containerID="6e88de9fcac657f018a7f1c884a9a0247d3e1b8aa7b164ceb4a36471afcdaca4" exitCode=0 Dec 02 14:10:10 crc kubenswrapper[4625]: I1202 14:10:10.617616 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62h2h" event={"ID":"0cd825a5-89cf-42bb-987a-553a69f4839f","Type":"ContainerDied","Data":"6e88de9fcac657f018a7f1c884a9a0247d3e1b8aa7b164ceb4a36471afcdaca4"} Dec 02 14:10:10 crc kubenswrapper[4625]: I1202 14:10:10.617984 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62h2h" event={"ID":"0cd825a5-89cf-42bb-987a-553a69f4839f","Type":"ContainerStarted","Data":"13c124c34bf963f2fc263956fcc4b3fa056b403b54761602d297c8d4e9d9fbb1"} Dec 02 14:10:12 crc kubenswrapper[4625]: I1202 14:10:12.645338 4625 generic.go:334] "Generic (PLEG): container finished" podID="0cd825a5-89cf-42bb-987a-553a69f4839f" containerID="8c2c6c4ffdef37176c0d4d8d1464f479eb90e06c77596f21f320e2adf0d97875" exitCode=0 Dec 02 14:10:12 crc kubenswrapper[4625]: I1202 14:10:12.645470 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62h2h" event={"ID":"0cd825a5-89cf-42bb-987a-553a69f4839f","Type":"ContainerDied","Data":"8c2c6c4ffdef37176c0d4d8d1464f479eb90e06c77596f21f320e2adf0d97875"} Dec 02 14:10:12 crc kubenswrapper[4625]: I1202 14:10:12.883597 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54ffdb7d8c-skjm6" Dec 02 14:10:12 crc kubenswrapper[4625]: I1202 14:10:12.972940 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-j5jqf"] Dec 02 14:10:12 crc kubenswrapper[4625]: I1202 14:10:12.973299 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" podUID="fdebccf3-800b-4f0d-8058-eb9608d05a2e" containerName="dnsmasq-dns" containerID="cri-o://acdbb2c0f1e52e4882ccdc4b4a4b5fbd2a4a4d758a4cf2681f66f20ea8948a52" gracePeriod=10 Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.667459 4625 generic.go:334] "Generic 
(PLEG): container finished" podID="fdebccf3-800b-4f0d-8058-eb9608d05a2e" containerID="acdbb2c0f1e52e4882ccdc4b4a4b5fbd2a4a4d758a4cf2681f66f20ea8948a52" exitCode=0 Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.667624 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" event={"ID":"fdebccf3-800b-4f0d-8058-eb9608d05a2e","Type":"ContainerDied","Data":"acdbb2c0f1e52e4882ccdc4b4a4b5fbd2a4a4d758a4cf2681f66f20ea8948a52"} Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.669633 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" event={"ID":"fdebccf3-800b-4f0d-8058-eb9608d05a2e","Type":"ContainerDied","Data":"2158b025de2cb0266f09824048064584a6233049f894a21a07a301fff57e1cec"} Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.669650 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2158b025de2cb0266f09824048064584a6233049f894a21a07a301fff57e1cec" Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.675658 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62h2h" event={"ID":"0cd825a5-89cf-42bb-987a-553a69f4839f","Type":"ContainerStarted","Data":"c6cfc228e76060e0e2f4234fb56004b3a1db84abf0580cfb678b22413fd15238"} Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.685301 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.709924 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-62h2h" podStartSLOduration=3.210631223 podStartE2EDuration="5.709898936s" podCreationTimestamp="2025-12-02 14:10:08 +0000 UTC" firstStartedPulling="2025-12-02 14:10:10.621448727 +0000 UTC m=+1566.583625802" lastFinishedPulling="2025-12-02 14:10:13.12071644 +0000 UTC m=+1569.082893515" observedRunningTime="2025-12-02 14:10:13.699871273 +0000 UTC m=+1569.662048338" watchObservedRunningTime="2025-12-02 14:10:13.709898936 +0000 UTC m=+1569.672076011" Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.751390 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-ovsdbserver-nb\") pod \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.751433 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-ovsdbserver-sb\") pod \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.751475 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-config\") pod \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.751506 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k2ft\" (UniqueName: \"kubernetes.io/projected/fdebccf3-800b-4f0d-8058-eb9608d05a2e-kube-api-access-4k2ft\") pod \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\" (UID: 
\"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.751541 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-openstack-edpm-ipam\") pod \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.751579 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-dns-swift-storage-0\") pod \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.751636 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-dns-svc\") pod \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\" (UID: \"fdebccf3-800b-4f0d-8058-eb9608d05a2e\") " Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.802040 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdebccf3-800b-4f0d-8058-eb9608d05a2e-kube-api-access-4k2ft" (OuterVolumeSpecName: "kube-api-access-4k2ft") pod "fdebccf3-800b-4f0d-8058-eb9608d05a2e" (UID: "fdebccf3-800b-4f0d-8058-eb9608d05a2e"). InnerVolumeSpecName "kube-api-access-4k2ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.859496 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k2ft\" (UniqueName: \"kubernetes.io/projected/fdebccf3-800b-4f0d-8058-eb9608d05a2e-kube-api-access-4k2ft\") on node \"crc\" DevicePath \"\"" Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.860456 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fdebccf3-800b-4f0d-8058-eb9608d05a2e" (UID: "fdebccf3-800b-4f0d-8058-eb9608d05a2e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.884023 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fdebccf3-800b-4f0d-8058-eb9608d05a2e" (UID: "fdebccf3-800b-4f0d-8058-eb9608d05a2e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.904652 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-config" (OuterVolumeSpecName: "config") pod "fdebccf3-800b-4f0d-8058-eb9608d05a2e" (UID: "fdebccf3-800b-4f0d-8058-eb9608d05a2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.916651 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fdebccf3-800b-4f0d-8058-eb9608d05a2e" (UID: "fdebccf3-800b-4f0d-8058-eb9608d05a2e"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.926552 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "fdebccf3-800b-4f0d-8058-eb9608d05a2e" (UID: "fdebccf3-800b-4f0d-8058-eb9608d05a2e"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.941453 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fdebccf3-800b-4f0d-8058-eb9608d05a2e" (UID: "fdebccf3-800b-4f0d-8058-eb9608d05a2e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.960908 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.960963 4625 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.960977 4625 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-config\") on node \"crc\" DevicePath \"\"" Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.960990 4625 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.961006 4625 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:10:13 crc kubenswrapper[4625]: I1202 14:10:13.961017 4625 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdebccf3-800b-4f0d-8058-eb9608d05a2e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 02 14:10:14 crc kubenswrapper[4625]: I1202 14:10:14.687760 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-j5jqf" Dec 02 14:10:14 crc kubenswrapper[4625]: I1202 14:10:14.757365 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-j5jqf"] Dec 02 14:10:14 crc kubenswrapper[4625]: I1202 14:10:14.772083 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-j5jqf"] Dec 02 14:10:14 crc kubenswrapper[4625]: I1202 14:10:14.871824 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdebccf3-800b-4f0d-8058-eb9608d05a2e" path="/var/lib/kubelet/pods/fdebccf3-800b-4f0d-8058-eb9608d05a2e/volumes" Dec 02 14:10:19 crc kubenswrapper[4625]: I1202 14:10:19.185745 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:19 crc kubenswrapper[4625]: I1202 14:10:19.186667 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:19 crc kubenswrapper[4625]: I1202 14:10:19.242774 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:19 crc kubenswrapper[4625]: I1202 14:10:19.791972 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:20 crc kubenswrapper[4625]: I1202 14:10:20.491307 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-62h2h"] Dec 02 14:10:21 crc kubenswrapper[4625]: I1202 14:10:21.759210 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-62h2h" podUID="0cd825a5-89cf-42bb-987a-553a69f4839f" containerName="registry-server" containerID="cri-o://c6cfc228e76060e0e2f4234fb56004b3a1db84abf0580cfb678b22413fd15238" gracePeriod=2 Dec 02 14:10:22 crc kubenswrapper[4625]: I1202 14:10:22.981666 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:22 crc kubenswrapper[4625]: I1202 14:10:22.993623 4625 generic.go:334] "Generic (PLEG): container finished" podID="0cd825a5-89cf-42bb-987a-553a69f4839f" containerID="c6cfc228e76060e0e2f4234fb56004b3a1db84abf0580cfb678b22413fd15238" exitCode=0 Dec 02 14:10:22 crc kubenswrapper[4625]: I1202 14:10:22.993695 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62h2h" event={"ID":"0cd825a5-89cf-42bb-987a-553a69f4839f","Type":"ContainerDied","Data":"c6cfc228e76060e0e2f4234fb56004b3a1db84abf0580cfb678b22413fd15238"} Dec 02 14:10:22 crc kubenswrapper[4625]: I1202 14:10:22.993743 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62h2h" event={"ID":"0cd825a5-89cf-42bb-987a-553a69f4839f","Type":"ContainerDied","Data":"13c124c34bf963f2fc263956fcc4b3fa056b403b54761602d297c8d4e9d9fbb1"} Dec 02 14:10:22 crc kubenswrapper[4625]: I1202 14:10:22.993769 4625 scope.go:117] "RemoveContainer" containerID="c6cfc228e76060e0e2f4234fb56004b3a1db84abf0580cfb678b22413fd15238" Dec 02 14:10:22 crc kubenswrapper[4625]: I1202 14:10:22.994154 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62h2h" Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.059151 4625 scope.go:117] "RemoveContainer" containerID="8c2c6c4ffdef37176c0d4d8d1464f479eb90e06c77596f21f320e2adf0d97875" Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.089077 4625 scope.go:117] "RemoveContainer" containerID="6e88de9fcac657f018a7f1c884a9a0247d3e1b8aa7b164ceb4a36471afcdaca4" Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.184062 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42kbb\" (UniqueName: \"kubernetes.io/projected/0cd825a5-89cf-42bb-987a-553a69f4839f-kube-api-access-42kbb\") pod \"0cd825a5-89cf-42bb-987a-553a69f4839f\" (UID: \"0cd825a5-89cf-42bb-987a-553a69f4839f\") " Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.184299 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd825a5-89cf-42bb-987a-553a69f4839f-utilities\") pod \"0cd825a5-89cf-42bb-987a-553a69f4839f\" (UID: \"0cd825a5-89cf-42bb-987a-553a69f4839f\") " Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.184367 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd825a5-89cf-42bb-987a-553a69f4839f-catalog-content\") pod \"0cd825a5-89cf-42bb-987a-553a69f4839f\" (UID: \"0cd825a5-89cf-42bb-987a-553a69f4839f\") " Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.185929 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cd825a5-89cf-42bb-987a-553a69f4839f-utilities" (OuterVolumeSpecName: "utilities") pod "0cd825a5-89cf-42bb-987a-553a69f4839f" (UID: "0cd825a5-89cf-42bb-987a-553a69f4839f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.207336 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cd825a5-89cf-42bb-987a-553a69f4839f-kube-api-access-42kbb" (OuterVolumeSpecName: "kube-api-access-42kbb") pod "0cd825a5-89cf-42bb-987a-553a69f4839f" (UID: "0cd825a5-89cf-42bb-987a-553a69f4839f"). InnerVolumeSpecName "kube-api-access-42kbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.213262 4625 scope.go:117] "RemoveContainer" containerID="c6cfc228e76060e0e2f4234fb56004b3a1db84abf0580cfb678b22413fd15238" Dec 02 14:10:23 crc kubenswrapper[4625]: E1202 14:10:23.214220 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6cfc228e76060e0e2f4234fb56004b3a1db84abf0580cfb678b22413fd15238\": container with ID starting with c6cfc228e76060e0e2f4234fb56004b3a1db84abf0580cfb678b22413fd15238 not found: ID does not exist" containerID="c6cfc228e76060e0e2f4234fb56004b3a1db84abf0580cfb678b22413fd15238" Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.214335 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6cfc228e76060e0e2f4234fb56004b3a1db84abf0580cfb678b22413fd15238"} err="failed to get container status \"c6cfc228e76060e0e2f4234fb56004b3a1db84abf0580cfb678b22413fd15238\": rpc error: code = NotFound desc = could not find container \"c6cfc228e76060e0e2f4234fb56004b3a1db84abf0580cfb678b22413fd15238\": container with ID starting with c6cfc228e76060e0e2f4234fb56004b3a1db84abf0580cfb678b22413fd15238 not found: ID does not exist" Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.214388 4625 scope.go:117] "RemoveContainer" containerID="8c2c6c4ffdef37176c0d4d8d1464f479eb90e06c77596f21f320e2adf0d97875" Dec 02 14:10:23 crc kubenswrapper[4625]: E1202 14:10:23.215197 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c2c6c4ffdef37176c0d4d8d1464f479eb90e06c77596f21f320e2adf0d97875\": container with ID starting with 8c2c6c4ffdef37176c0d4d8d1464f479eb90e06c77596f21f320e2adf0d97875 not found: ID does not exist" containerID="8c2c6c4ffdef37176c0d4d8d1464f479eb90e06c77596f21f320e2adf0d97875" Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.215257 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2c6c4ffdef37176c0d4d8d1464f479eb90e06c77596f21f320e2adf0d97875"} err="failed to get container status \"8c2c6c4ffdef37176c0d4d8d1464f479eb90e06c77596f21f320e2adf0d97875\": rpc error: code = NotFound desc = could not find container \"8c2c6c4ffdef37176c0d4d8d1464f479eb90e06c77596f21f320e2adf0d97875\": container with ID starting with 8c2c6c4ffdef37176c0d4d8d1464f479eb90e06c77596f21f320e2adf0d97875 not found: ID does not exist" Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.215298 4625 scope.go:117] "RemoveContainer" containerID="6e88de9fcac657f018a7f1c884a9a0247d3e1b8aa7b164ceb4a36471afcdaca4" Dec 02 14:10:23 crc kubenswrapper[4625]: E1202 14:10:23.215631 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e88de9fcac657f018a7f1c884a9a0247d3e1b8aa7b164ceb4a36471afcdaca4\": container with ID starting with 6e88de9fcac657f018a7f1c884a9a0247d3e1b8aa7b164ceb4a36471afcdaca4 not found: ID does not exist" containerID="6e88de9fcac657f018a7f1c884a9a0247d3e1b8aa7b164ceb4a36471afcdaca4" Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.215668 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e88de9fcac657f018a7f1c884a9a0247d3e1b8aa7b164ceb4a36471afcdaca4"} err="failed to get container status \"6e88de9fcac657f018a7f1c884a9a0247d3e1b8aa7b164ceb4a36471afcdaca4\": rpc error: code = NotFound desc = could not 
find container \"6e88de9fcac657f018a7f1c884a9a0247d3e1b8aa7b164ceb4a36471afcdaca4\": container with ID starting with 6e88de9fcac657f018a7f1c884a9a0247d3e1b8aa7b164ceb4a36471afcdaca4 not found: ID does not exist" Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.252766 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cd825a5-89cf-42bb-987a-553a69f4839f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cd825a5-89cf-42bb-987a-553a69f4839f" (UID: "0cd825a5-89cf-42bb-987a-553a69f4839f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.287536 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42kbb\" (UniqueName: \"kubernetes.io/projected/0cd825a5-89cf-42bb-987a-553a69f4839f-kube-api-access-42kbb\") on node \"crc\" DevicePath \"\"" Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.287581 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cd825a5-89cf-42bb-987a-553a69f4839f-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.287593 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cd825a5-89cf-42bb-987a-553a69f4839f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.357086 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-62h2h"] Dec 02 14:10:23 crc kubenswrapper[4625]: I1202 14:10:23.369110 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-62h2h"] Dec 02 14:10:24 crc kubenswrapper[4625]: I1202 14:10:24.872297 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cd825a5-89cf-42bb-987a-553a69f4839f" path="/var/lib/kubelet/pods/0cd825a5-89cf-42bb-987a-553a69f4839f/volumes" Dec 02 14:10:29 crc kubenswrapper[4625]: I1202 14:10:29.061752 4625 generic.go:334] "Generic (PLEG): container finished" podID="5eb1d307-4690-436e-8f82-a27eff014c84" containerID="2b3067ff9dda4c6ab551f8c410076bb694956cbfb196184213b76583c012def0" exitCode=0 Dec 02 14:10:29 crc kubenswrapper[4625]: I1202 14:10:29.061849 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5eb1d307-4690-436e-8f82-a27eff014c84","Type":"ContainerDied","Data":"2b3067ff9dda4c6ab551f8c410076bb694956cbfb196184213b76583c012def0"} Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.142886 4625 patch_prober.go:28] interesting pod/router-default-5444994796-pqzl9 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.144469 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-pqzl9" podUID="5ca4e0fc-6aab-4f08-afdf-d61583c63f6f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.801404 4625 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g2crb container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": context deadline exceeded" start-of-body= Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.801647 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" podUID="7e8d817d-9152-48c4-b7b0-f9df76891753" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": context deadline exceeded" Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.815183 4625 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qcjf2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: i/o timeout" start-of-body= Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.815291 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" podUID="7c402f2a-3e9f-4eba-a881-a59ae3626f5a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: i/o timeout" Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.834869 4625 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g2crb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: i/o timeout" start-of-body= Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.834968 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2crb" podUID="7e8d817d-9152-48c4-b7b0-f9df76891753" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: i/o timeout" Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.881766 4625 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9vd9w container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.882501 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" podUID="2ec55e1a-74d5-4c19-abde-2b8d8e9f392c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.882951 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-m2rh6" podUID="2bdff728-939b-414c-a0e9-35520fc54d71" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.883503 4625 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9vd9w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 14:10:31 
crc kubenswrapper[4625]: I1202 14:10:31.883553 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9vd9w" podUID="2ec55e1a-74d5-4c19-abde-2b8d8e9f392c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.883580 4625 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qcjf2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.883599 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qcjf2" podUID="7c402f2a-3e9f-4eba-a881-a59ae3626f5a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.883944 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-m2rh6" podUID="2bdff728-939b-414c-a0e9-35520fc54d71" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.887646 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-72kpb" podUID="aad37202-ae48-4da9-b478-fad57dd764f2" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 02 14:10:31 crc kubenswrapper[4625]: E1202 14:10:31.928169 4625 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.072s" Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.956109 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5eb1d307-4690-436e-8f82-a27eff014c84","Type":"ContainerStarted","Data":"afe07d74bdf632007210cb8a96ab56cca81762872a820efb443529e204b6c40e"} Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.957423 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.981326 4625 generic.go:334] "Generic (PLEG): container finished" podID="50ba9ca8-e722-4c48-9435-a358d35a893e" containerID="d3d00ddb215839075fcf2a7caf835732d5be34e02372839c1708f5f007f317ca" exitCode=0 Dec 02 14:10:31 crc kubenswrapper[4625]: I1202 14:10:31.981406 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50ba9ca8-e722-4c48-9435-a358d35a893e","Type":"ContainerDied","Data":"d3d00ddb215839075fcf2a7caf835732d5be34e02372839c1708f5f007f317ca"} Dec 02 14:10:32 crc kubenswrapper[4625]: I1202 14:10:32.022283 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.02225066 podStartE2EDuration="39.02225066s" podCreationTimestamp="2025-12-02 14:09:53 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:10:32.009890454 +0000 UTC m=+1587.972067529" watchObservedRunningTime="2025-12-02 14:10:32.02225066 +0000 UTC m=+1587.984427735" Dec 02 14:10:32 crc kubenswrapper[4625]: I1202 14:10:32.995884 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50ba9ca8-e722-4c48-9435-a358d35a893e","Type":"ContainerStarted","Data":"2fbd0ec6f1dcec704e7aca928d4ea7ad5bcd12151c9c7ba20cfebf5f4410ed84"} Dec 02 14:10:32 crc kubenswrapper[4625]: I1202 14:10:32.996688 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:10:33 crc kubenswrapper[4625]: I1202 14:10:33.059576 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.059549066 podStartE2EDuration="39.059549066s" podCreationTimestamp="2025-12-02 14:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:10:33.055586728 +0000 UTC m=+1589.017763803" watchObservedRunningTime="2025-12-02 14:10:33.059549066 +0000 UTC m=+1589.021726141" Dec 02 14:10:33 crc kubenswrapper[4625]: I1202 14:10:33.809456 4625 scope.go:117] "RemoveContainer" containerID="42475adc2afe6413daadfca967f6430024b3163474318d831113c1c253391c87" Dec 02 14:10:33 crc kubenswrapper[4625]: I1202 14:10:33.837762 4625 scope.go:117] "RemoveContainer" containerID="6b1c012b42008cd93af2bfd48d720b760679aabff26c98770df03c8325c20d77" Dec 02 14:10:33 crc kubenswrapper[4625]: I1202 14:10:33.865089 4625 scope.go:117] "RemoveContainer" containerID="74423cd7136a03916398e9f5880e9c2c9d458ff77d53254c811315bc8aadc6f2" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.497009 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2"] Dec 02 14:10:36 crc kubenswrapper[4625]: E1202 14:10:36.498020 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdebccf3-800b-4f0d-8058-eb9608d05a2e" containerName="dnsmasq-dns" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.498038 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdebccf3-800b-4f0d-8058-eb9608d05a2e" containerName="dnsmasq-dns" Dec 02 14:10:36 crc kubenswrapper[4625]: E1202 14:10:36.498062 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd825a5-89cf-42bb-987a-553a69f4839f" containerName="registry-server" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.498068 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd825a5-89cf-42bb-987a-553a69f4839f" containerName="registry-server" Dec 02 14:10:36 crc kubenswrapper[4625]: E1202 14:10:36.498081 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd825a5-89cf-42bb-987a-553a69f4839f" containerName="extract-utilities" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.498086 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cd825a5-89cf-42bb-987a-553a69f4839f" containerName="extract-utilities" Dec 02 14:10:36 crc kubenswrapper[4625]: E1202 14:10:36.498103 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cd825a5-89cf-42bb-987a-553a69f4839f" containerName="extract-content" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.498109 4625 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0cd825a5-89cf-42bb-987a-553a69f4839f" containerName="extract-content" Dec 02 14:10:36 crc kubenswrapper[4625]: E1202 14:10:36.498142 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdebccf3-800b-4f0d-8058-eb9608d05a2e" containerName="init" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.498149 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdebccf3-800b-4f0d-8058-eb9608d05a2e" containerName="init" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.502899 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cd825a5-89cf-42bb-987a-553a69f4839f" containerName="registry-server" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.502951 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdebccf3-800b-4f0d-8058-eb9608d05a2e" containerName="dnsmasq-dns" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.504106 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.507185 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.507299 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.507487 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.507662 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.516559 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2"] Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.797559 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2\" (UID: \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.797651 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzplv\" (UniqueName: \"kubernetes.io/projected/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-kube-api-access-lzplv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2\" (UID: \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.797719 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2\" (UID: \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.797752 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2\" (UID: \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.903063 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2\" (UID: \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.903159 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzplv\" (UniqueName: \"kubernetes.io/projected/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-kube-api-access-lzplv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2\" (UID: \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.903221 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2\" (UID: \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.903264 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2\" (UID: \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.919073 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2\" (UID: \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.933343 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2\" (UID: \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.937097 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzplv\" (UniqueName: \"kubernetes.io/projected/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-kube-api-access-lzplv\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2\" (UID: \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" Dec 02 14:10:36 crc kubenswrapper[4625]: I1202 14:10:36.949729 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2\" (UID: \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" Dec 02 14:10:37 crc kubenswrapper[4625]: I1202 14:10:37.015791 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" Dec 02 14:10:37 crc kubenswrapper[4625]: I1202 14:10:37.751979 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2"] Dec 02 14:10:38 crc kubenswrapper[4625]: I1202 14:10:38.070854 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" event={"ID":"d28486df-5cbd-4cf1-ab77-3bb7c4582d36","Type":"ContainerStarted","Data":"ed3294b04267b231811e132aa0050da83c8235dc7267260b324ef9cbc080418f"} Dec 02 14:10:43 crc kubenswrapper[4625]: I1202 14:10:43.762509 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="5eb1d307-4690-436e-8f82-a27eff014c84" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.205:5671: connect: connection refused" Dec 02 14:10:45 crc kubenswrapper[4625]: I1202 14:10:45.544633 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 02 14:10:53 crc kubenswrapper[4625]: I1202 14:10:53.366240 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" event={"ID":"d28486df-5cbd-4cf1-ab77-3bb7c4582d36","Type":"ContainerStarted","Data":"fdb159a8c134b56bad53a7e89e007b5ba1cdd9af519a3ebd03f279bd063a712c"} Dec 02 14:10:53 crc kubenswrapper[4625]: I1202 14:10:53.398234 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" podStartSLOduration=2.869215744 podStartE2EDuration="17.398206556s" podCreationTimestamp="2025-12-02 14:10:36 +0000 UTC" firstStartedPulling="2025-12-02 14:10:37.780398007 +0000 UTC m=+1593.742575082" lastFinishedPulling="2025-12-02 14:10:52.309388819 +0000 UTC m=+1608.271565894" observedRunningTime="2025-12-02 14:10:53.394391122 +0000 UTC m=+1609.356568197" watchObservedRunningTime="2025-12-02 14:10:53.398206556 +0000 UTC m=+1609.360383631" Dec 02 14:10:53 crc kubenswrapper[4625]: I1202 14:10:53.757620 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 02 14:11:06 crc kubenswrapper[4625]: I1202 14:11:06.560390 4625 generic.go:334] "Generic (PLEG): container finished" podID="d28486df-5cbd-4cf1-ab77-3bb7c4582d36" containerID="fdb159a8c134b56bad53a7e89e007b5ba1cdd9af519a3ebd03f279bd063a712c" exitCode=0 Dec 02 14:11:06 crc kubenswrapper[4625]: I1202 14:11:06.561191 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" event={"ID":"d28486df-5cbd-4cf1-ab77-3bb7c4582d36","Type":"ContainerDied","Data":"fdb159a8c134b56bad53a7e89e007b5ba1cdd9af519a3ebd03f279bd063a712c"} Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.175911 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.198272 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-ssh-key\") pod \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\" (UID: \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\") " Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.198827 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-inventory\") pod \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\" (UID: \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\") " Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.198949 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzplv\" (UniqueName: \"kubernetes.io/projected/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-kube-api-access-lzplv\") pod \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\" (UID: \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\") " Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.199196 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-repo-setup-combined-ca-bundle\") pod \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\" (UID: \"d28486df-5cbd-4cf1-ab77-3bb7c4582d36\") " Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.208228 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-kube-api-access-lzplv" (OuterVolumeSpecName: "kube-api-access-lzplv") pod "d28486df-5cbd-4cf1-ab77-3bb7c4582d36" (UID: "d28486df-5cbd-4cf1-ab77-3bb7c4582d36"). InnerVolumeSpecName "kube-api-access-lzplv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.208330 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d28486df-5cbd-4cf1-ab77-3bb7c4582d36" (UID: "d28486df-5cbd-4cf1-ab77-3bb7c4582d36"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.259959 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d28486df-5cbd-4cf1-ab77-3bb7c4582d36" (UID: "d28486df-5cbd-4cf1-ab77-3bb7c4582d36"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.285502 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-inventory" (OuterVolumeSpecName: "inventory") pod "d28486df-5cbd-4cf1-ab77-3bb7c4582d36" (UID: "d28486df-5cbd-4cf1-ab77-3bb7c4582d36"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.302481 4625 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.302526 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.302547 4625 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.302557 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzplv\" (UniqueName: \"kubernetes.io/projected/d28486df-5cbd-4cf1-ab77-3bb7c4582d36-kube-api-access-lzplv\") on node \"crc\" DevicePath \"\"" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.607122 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" event={"ID":"d28486df-5cbd-4cf1-ab77-3bb7c4582d36","Type":"ContainerDied","Data":"ed3294b04267b231811e132aa0050da83c8235dc7267260b324ef9cbc080418f"} Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.607182 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed3294b04267b231811e132aa0050da83c8235dc7267260b324ef9cbc080418f" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.607253 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.724618 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6"] Dec 02 14:11:08 crc kubenswrapper[4625]: E1202 14:11:08.725238 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d28486df-5cbd-4cf1-ab77-3bb7c4582d36" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.725268 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28486df-5cbd-4cf1-ab77-3bb7c4582d36" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.729888 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="d28486df-5cbd-4cf1-ab77-3bb7c4582d36" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.730811 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.735375 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.735572 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.735647 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.735880 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.736415 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6"] Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.822142 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr868\" (UniqueName: \"kubernetes.io/projected/1062a08b-7d27-49af-bc24-3d8aae739f10-kube-api-access-zr868\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4d7z6\" (UID: \"1062a08b-7d27-49af-bc24-3d8aae739f10\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.822286 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1062a08b-7d27-49af-bc24-3d8aae739f10-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4d7z6\" (UID: \"1062a08b-7d27-49af-bc24-3d8aae739f10\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.822387 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1062a08b-7d27-49af-bc24-3d8aae739f10-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4d7z6\" (UID: \"1062a08b-7d27-49af-bc24-3d8aae739f10\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.923249 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1062a08b-7d27-49af-bc24-3d8aae739f10-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4d7z6\" (UID: \"1062a08b-7d27-49af-bc24-3d8aae739f10\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.924905 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1062a08b-7d27-49af-bc24-3d8aae739f10-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4d7z6\" (UID: \"1062a08b-7d27-49af-bc24-3d8aae739f10\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.926176 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr868\" (UniqueName: \"kubernetes.io/projected/1062a08b-7d27-49af-bc24-3d8aae739f10-kube-api-access-zr868\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4d7z6\" (UID: \"1062a08b-7d27-49af-bc24-3d8aae739f10\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.929097 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1062a08b-7d27-49af-bc24-3d8aae739f10-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4d7z6\" (UID: \"1062a08b-7d27-49af-bc24-3d8aae739f10\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.929141 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1062a08b-7d27-49af-bc24-3d8aae739f10-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4d7z6\" (UID: \"1062a08b-7d27-49af-bc24-3d8aae739f10\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" Dec 02 14:11:08 crc kubenswrapper[4625]: I1202 14:11:08.945215 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr868\" (UniqueName: \"kubernetes.io/projected/1062a08b-7d27-49af-bc24-3d8aae739f10-kube-api-access-zr868\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4d7z6\" (UID: \"1062a08b-7d27-49af-bc24-3d8aae739f10\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" Dec 02 14:11:09 crc kubenswrapper[4625]: I1202 14:11:09.063528 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" Dec 02 14:11:09 crc kubenswrapper[4625]: I1202 14:11:09.716267 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6"] Dec 02 14:11:09 crc kubenswrapper[4625]: W1202 14:11:09.723366 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1062a08b_7d27_49af_bc24_3d8aae739f10.slice/crio-2e8360e52cd8ecca28068dcbf7f22d041965d270c3d8970f0f9246b9f2db3c96 WatchSource:0}: Error finding container 2e8360e52cd8ecca28068dcbf7f22d041965d270c3d8970f0f9246b9f2db3c96: Status 404 returned error can't find the container with id 2e8360e52cd8ecca28068dcbf7f22d041965d270c3d8970f0f9246b9f2db3c96 Dec 02 14:11:10 crc kubenswrapper[4625]: I1202 14:11:10.629843 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" event={"ID":"1062a08b-7d27-49af-bc24-3d8aae739f10","Type":"ContainerStarted","Data":"2e8360e52cd8ecca28068dcbf7f22d041965d270c3d8970f0f9246b9f2db3c96"} Dec 02 14:11:11 crc kubenswrapper[4625]: I1202 14:11:11.644107 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" event={"ID":"1062a08b-7d27-49af-bc24-3d8aae739f10","Type":"ContainerStarted","Data":"c8aaed8fdd5208d95a40f440a410c79eaa5436f92b2105648066c4cb966f78ef"} Dec 02 14:11:11 crc kubenswrapper[4625]: I1202 14:11:11.676419 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" podStartSLOduration=2.952992194 podStartE2EDuration="3.676380231s" podCreationTimestamp="2025-12-02 14:11:08 +0000 UTC" firstStartedPulling="2025-12-02 14:11:09.72566256 +0000 UTC m=+1625.687839635" lastFinishedPulling="2025-12-02 14:11:10.449050597 +0000 UTC m=+1626.411227672" observedRunningTime="2025-12-02 14:11:11.663680106 +0000 UTC m=+1627.625857191" watchObservedRunningTime="2025-12-02 14:11:11.676380231 +0000 UTC 
m=+1627.638557306" Dec 02 14:11:13 crc kubenswrapper[4625]: I1202 14:11:13.669760 4625 generic.go:334] "Generic (PLEG): container finished" podID="1062a08b-7d27-49af-bc24-3d8aae739f10" containerID="c8aaed8fdd5208d95a40f440a410c79eaa5436f92b2105648066c4cb966f78ef" exitCode=0 Dec 02 14:11:13 crc kubenswrapper[4625]: I1202 14:11:13.669834 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" event={"ID":"1062a08b-7d27-49af-bc24-3d8aae739f10","Type":"ContainerDied","Data":"c8aaed8fdd5208d95a40f440a410c79eaa5436f92b2105648066c4cb966f78ef"} Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.143037 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.208991 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1062a08b-7d27-49af-bc24-3d8aae739f10-inventory\") pod \"1062a08b-7d27-49af-bc24-3d8aae739f10\" (UID: \"1062a08b-7d27-49af-bc24-3d8aae739f10\") " Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.209199 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1062a08b-7d27-49af-bc24-3d8aae739f10-ssh-key\") pod \"1062a08b-7d27-49af-bc24-3d8aae739f10\" (UID: \"1062a08b-7d27-49af-bc24-3d8aae739f10\") " Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.209334 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr868\" (UniqueName: \"kubernetes.io/projected/1062a08b-7d27-49af-bc24-3d8aae739f10-kube-api-access-zr868\") pod \"1062a08b-7d27-49af-bc24-3d8aae739f10\" (UID: \"1062a08b-7d27-49af-bc24-3d8aae739f10\") " Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.230763 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1062a08b-7d27-49af-bc24-3d8aae739f10-kube-api-access-zr868" (OuterVolumeSpecName: "kube-api-access-zr868") pod "1062a08b-7d27-49af-bc24-3d8aae739f10" (UID: "1062a08b-7d27-49af-bc24-3d8aae739f10"). InnerVolumeSpecName "kube-api-access-zr868". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.271267 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1062a08b-7d27-49af-bc24-3d8aae739f10-inventory" (OuterVolumeSpecName: "inventory") pod "1062a08b-7d27-49af-bc24-3d8aae739f10" (UID: "1062a08b-7d27-49af-bc24-3d8aae739f10"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.273097 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1062a08b-7d27-49af-bc24-3d8aae739f10-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1062a08b-7d27-49af-bc24-3d8aae739f10" (UID: "1062a08b-7d27-49af-bc24-3d8aae739f10"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.312821 4625 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1062a08b-7d27-49af-bc24-3d8aae739f10-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.312877 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1062a08b-7d27-49af-bc24-3d8aae739f10-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.312890 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr868\" (UniqueName: \"kubernetes.io/projected/1062a08b-7d27-49af-bc24-3d8aae739f10-kube-api-access-zr868\") on node \"crc\" DevicePath \"\"" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.695583 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.695535 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4d7z6" event={"ID":"1062a08b-7d27-49af-bc24-3d8aae739f10","Type":"ContainerDied","Data":"2e8360e52cd8ecca28068dcbf7f22d041965d270c3d8970f0f9246b9f2db3c96"} Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.696183 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e8360e52cd8ecca28068dcbf7f22d041965d270c3d8970f0f9246b9f2db3c96" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.819949 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb"] Dec 02 14:11:15 crc kubenswrapper[4625]: E1202 14:11:15.820486 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1062a08b-7d27-49af-bc24-3d8aae739f10" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.820507 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="1062a08b-7d27-49af-bc24-3d8aae739f10" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.820744 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="1062a08b-7d27-49af-bc24-3d8aae739f10" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.821485 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.827999 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.828199 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.828231 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.828506 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.838102 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb"] Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.932404 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jntvr\" (UniqueName: \"kubernetes.io/projected/b31c21d6-4087-4521-8566-14b2eeabb679-kube-api-access-jntvr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb\" (UID: \"b31c21d6-4087-4521-8566-14b2eeabb679\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.932775 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb\" (UID: \"b31c21d6-4087-4521-8566-14b2eeabb679\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.932889 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb\" (UID: \"b31c21d6-4087-4521-8566-14b2eeabb679\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" Dec 02 14:11:15 crc kubenswrapper[4625]: I1202 14:11:15.933051 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb\" (UID: \"b31c21d6-4087-4521-8566-14b2eeabb679\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" Dec 02 14:11:16 crc kubenswrapper[4625]: I1202 14:11:16.035297 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb\" (UID: \"b31c21d6-4087-4521-8566-14b2eeabb679\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" Dec 02 14:11:16 crc kubenswrapper[4625]: I1202 14:11:16.035708 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-ssh-key\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb\" (UID: \"b31c21d6-4087-4521-8566-14b2eeabb679\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" Dec 02 14:11:16 crc kubenswrapper[4625]: I1202 14:11:16.035823 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb\" (UID: \"b31c21d6-4087-4521-8566-14b2eeabb679\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" Dec 02 14:11:16 crc kubenswrapper[4625]: I1202 14:11:16.035923 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jntvr\" (UniqueName: \"kubernetes.io/projected/b31c21d6-4087-4521-8566-14b2eeabb679-kube-api-access-jntvr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb\" (UID: \"b31c21d6-4087-4521-8566-14b2eeabb679\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" Dec 02 14:11:16 crc kubenswrapper[4625]: I1202 14:11:16.040550 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb\" (UID: \"b31c21d6-4087-4521-8566-14b2eeabb679\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" Dec 02 14:11:16 crc kubenswrapper[4625]: I1202 14:11:16.041296 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb\" (UID: \"b31c21d6-4087-4521-8566-14b2eeabb679\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" Dec 02 14:11:16 crc kubenswrapper[4625]: I1202 14:11:16.057292 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb\" (UID: \"b31c21d6-4087-4521-8566-14b2eeabb679\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" Dec 02 14:11:16 crc kubenswrapper[4625]: I1202 14:11:16.065498 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jntvr\" (UniqueName: \"kubernetes.io/projected/b31c21d6-4087-4521-8566-14b2eeabb679-kube-api-access-jntvr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb\" (UID: \"b31c21d6-4087-4521-8566-14b2eeabb679\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" Dec 02 14:11:16 crc kubenswrapper[4625]: I1202 14:11:16.172037 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" Dec 02 14:11:16 crc kubenswrapper[4625]: I1202 14:11:16.807080 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb"] Dec 02 14:11:17 crc kubenswrapper[4625]: I1202 14:11:17.720910 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" event={"ID":"b31c21d6-4087-4521-8566-14b2eeabb679","Type":"ContainerStarted","Data":"b2ff5f1ce0f33ecba4d959d9bd59ae7a92edb2ac3e00e7ebbcce86d6298ed334"} Dec 02 14:11:17 crc kubenswrapper[4625]: I1202 14:11:17.721881 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" event={"ID":"b31c21d6-4087-4521-8566-14b2eeabb679","Type":"ContainerStarted","Data":"194af72dd24becfaa9f69749f19453cc5c6baaf20773ae693b02dbd4fd00db18"} Dec 02 14:11:17 crc kubenswrapper[4625]: I1202 14:11:17.746099 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" podStartSLOduration=2.26695437 podStartE2EDuration="2.746075233s" podCreationTimestamp="2025-12-02 14:11:15 +0000 UTC" firstStartedPulling="2025-12-02 14:11:16.830957851 +0000 UTC m=+1632.793134926" lastFinishedPulling="2025-12-02 14:11:17.310078714 +0000 UTC m=+1633.272255789" observedRunningTime="2025-12-02 14:11:17.742826875 +0000 UTC m=+1633.705003940" watchObservedRunningTime="2025-12-02 14:11:17.746075233 +0000 UTC m=+1633.708252308" Dec 02 14:11:18 crc kubenswrapper[4625]: I1202 14:11:18.529461 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vwm9b"] Dec 02 14:11:18 crc kubenswrapper[4625]: I1202 14:11:18.532059 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:18 crc kubenswrapper[4625]: I1202 14:11:18.567563 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwm9b"] Dec 02 14:11:18 crc kubenswrapper[4625]: I1202 14:11:18.615134 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9vgg\" (UniqueName: \"kubernetes.io/projected/de9918c5-7ca4-4315-babb-40467da124a7-kube-api-access-t9vgg\") pod \"certified-operators-vwm9b\" (UID: \"de9918c5-7ca4-4315-babb-40467da124a7\") " pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:18 crc kubenswrapper[4625]: I1202 14:11:18.615276 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de9918c5-7ca4-4315-babb-40467da124a7-utilities\") pod \"certified-operators-vwm9b\" (UID: \"de9918c5-7ca4-4315-babb-40467da124a7\") " pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:18 crc kubenswrapper[4625]: I1202 14:11:18.615304 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de9918c5-7ca4-4315-babb-40467da124a7-catalog-content\") pod \"certified-operators-vwm9b\" (UID: \"de9918c5-7ca4-4315-babb-40467da124a7\") " pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:18 crc kubenswrapper[4625]: I1202 14:11:18.717598 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de9918c5-7ca4-4315-babb-40467da124a7-utilities\") pod \"certified-operators-vwm9b\" (UID: \"de9918c5-7ca4-4315-babb-40467da124a7\") " pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:18 crc kubenswrapper[4625]: I1202 14:11:18.717682 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de9918c5-7ca4-4315-babb-40467da124a7-catalog-content\") pod \"certified-operators-vwm9b\" (UID: \"de9918c5-7ca4-4315-babb-40467da124a7\") " pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:18 crc kubenswrapper[4625]: I1202 14:11:18.717865 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9vgg\" (UniqueName: \"kubernetes.io/projected/de9918c5-7ca4-4315-babb-40467da124a7-kube-api-access-t9vgg\") pod \"certified-operators-vwm9b\" (UID: \"de9918c5-7ca4-4315-babb-40467da124a7\") " pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:18 crc kubenswrapper[4625]: I1202 14:11:18.718281 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de9918c5-7ca4-4315-babb-40467da124a7-utilities\") pod \"certified-operators-vwm9b\" (UID: \"de9918c5-7ca4-4315-babb-40467da124a7\") " pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:18 crc kubenswrapper[4625]: I1202 14:11:18.718364 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de9918c5-7ca4-4315-babb-40467da124a7-catalog-content\") pod \"certified-operators-vwm9b\" (UID: \"de9918c5-7ca4-4315-babb-40467da124a7\") " pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:18 crc kubenswrapper[4625]: I1202 14:11:18.753298 4625 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t9vgg\" (UniqueName: \"kubernetes.io/projected/de9918c5-7ca4-4315-babb-40467da124a7-kube-api-access-t9vgg\") pod \"certified-operators-vwm9b\" (UID: \"de9918c5-7ca4-4315-babb-40467da124a7\") " pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:18 crc kubenswrapper[4625]: I1202 14:11:18.861856 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:19 crc kubenswrapper[4625]: I1202 14:11:19.272494 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:11:19 crc kubenswrapper[4625]: I1202 14:11:19.273026 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:11:19 crc kubenswrapper[4625]: I1202 14:11:19.440785 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwm9b"] Dec 02 14:11:19 crc kubenswrapper[4625]: I1202 14:11:19.753010 4625 generic.go:334] "Generic (PLEG): container finished" podID="de9918c5-7ca4-4315-babb-40467da124a7" containerID="c26c7f41d3df37280e7246bc311ebcfd2c03742c7e7789fdf11135bc8a8c0e46" exitCode=0 Dec 02 14:11:19 crc kubenswrapper[4625]: I1202 14:11:19.753140 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwm9b" event={"ID":"de9918c5-7ca4-4315-babb-40467da124a7","Type":"ContainerDied","Data":"c26c7f41d3df37280e7246bc311ebcfd2c03742c7e7789fdf11135bc8a8c0e46"} Dec 02 14:11:19 crc kubenswrapper[4625]: I1202 14:11:19.753518 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwm9b" event={"ID":"de9918c5-7ca4-4315-babb-40467da124a7","Type":"ContainerStarted","Data":"1c29fa77635fc68c3a984b8b345229d28c26f30e7d823b66e344c1267720c927"} Dec 02 14:11:21 crc kubenswrapper[4625]: I1202 14:11:21.788710 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwm9b" event={"ID":"de9918c5-7ca4-4315-babb-40467da124a7","Type":"ContainerStarted","Data":"e8072db68c016817e25f77c96dd7076e32411bfdae795b3b2a868201e9cd6e4e"} Dec 02 14:11:22 crc kubenswrapper[4625]: I1202 14:11:22.801599 4625 generic.go:334] "Generic (PLEG): container finished" podID="de9918c5-7ca4-4315-babb-40467da124a7" containerID="e8072db68c016817e25f77c96dd7076e32411bfdae795b3b2a868201e9cd6e4e" exitCode=0 Dec 02 14:11:22 crc kubenswrapper[4625]: I1202 14:11:22.802136 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwm9b" event={"ID":"de9918c5-7ca4-4315-babb-40467da124a7","Type":"ContainerDied","Data":"e8072db68c016817e25f77c96dd7076e32411bfdae795b3b2a868201e9cd6e4e"} Dec 02 14:11:23 crc kubenswrapper[4625]: I1202 14:11:23.814637 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwm9b" event={"ID":"de9918c5-7ca4-4315-babb-40467da124a7","Type":"ContainerStarted","Data":"39ef1ae12b8729ac7852cb21d96aede4b5f4cc786e06bdc1158783c20288461d"} Dec 02 
14:11:23 crc kubenswrapper[4625]: I1202 14:11:23.850135 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vwm9b" podStartSLOduration=2.192709994 podStartE2EDuration="5.850105618s" podCreationTimestamp="2025-12-02 14:11:18 +0000 UTC" firstStartedPulling="2025-12-02 14:11:19.755743408 +0000 UTC m=+1635.717920483" lastFinishedPulling="2025-12-02 14:11:23.413139032 +0000 UTC m=+1639.375316107" observedRunningTime="2025-12-02 14:11:23.837541626 +0000 UTC m=+1639.799718701" watchObservedRunningTime="2025-12-02 14:11:23.850105618 +0000 UTC m=+1639.812282693" Dec 02 14:11:28 crc kubenswrapper[4625]: I1202 14:11:28.875094 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:28 crc kubenswrapper[4625]: I1202 14:11:28.875978 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:28 crc kubenswrapper[4625]: I1202 14:11:28.917494 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:29 crc kubenswrapper[4625]: I1202 14:11:29.926403 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:30 crc kubenswrapper[4625]: I1202 14:11:30.020520 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwm9b"] Dec 02 14:11:31 crc kubenswrapper[4625]: I1202 14:11:31.898406 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vwm9b" podUID="de9918c5-7ca4-4315-babb-40467da124a7" containerName="registry-server" containerID="cri-o://39ef1ae12b8729ac7852cb21d96aede4b5f4cc786e06bdc1158783c20288461d" gracePeriod=2 Dec 02 14:11:32 crc kubenswrapper[4625]: E1202 14:11:32.178300 4625 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde9918c5_7ca4_4315_babb_40467da124a7.slice/crio-conmon-39ef1ae12b8729ac7852cb21d96aede4b5f4cc786e06bdc1158783c20288461d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde9918c5_7ca4_4315_babb_40467da124a7.slice/crio-39ef1ae12b8729ac7852cb21d96aede4b5f4cc786e06bdc1158783c20288461d.scope\": RecentStats: unable to find data in memory cache]" Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.437722 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.586492 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9vgg\" (UniqueName: \"kubernetes.io/projected/de9918c5-7ca4-4315-babb-40467da124a7-kube-api-access-t9vgg\") pod \"de9918c5-7ca4-4315-babb-40467da124a7\" (UID: \"de9918c5-7ca4-4315-babb-40467da124a7\") " Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.587017 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de9918c5-7ca4-4315-babb-40467da124a7-utilities\") pod \"de9918c5-7ca4-4315-babb-40467da124a7\" (UID: \"de9918c5-7ca4-4315-babb-40467da124a7\") " Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.587242 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de9918c5-7ca4-4315-babb-40467da124a7-catalog-content\") pod \"de9918c5-7ca4-4315-babb-40467da124a7\" (UID: \"de9918c5-7ca4-4315-babb-40467da124a7\") " Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.587889 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de9918c5-7ca4-4315-babb-40467da124a7-utilities" (OuterVolumeSpecName: "utilities") pod "de9918c5-7ca4-4315-babb-40467da124a7" (UID: "de9918c5-7ca4-4315-babb-40467da124a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.593959 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de9918c5-7ca4-4315-babb-40467da124a7-kube-api-access-t9vgg" (OuterVolumeSpecName: "kube-api-access-t9vgg") pod "de9918c5-7ca4-4315-babb-40467da124a7" (UID: "de9918c5-7ca4-4315-babb-40467da124a7"). InnerVolumeSpecName "kube-api-access-t9vgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.648256 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de9918c5-7ca4-4315-babb-40467da124a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de9918c5-7ca4-4315-babb-40467da124a7" (UID: "de9918c5-7ca4-4315-babb-40467da124a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.689741 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de9918c5-7ca4-4315-babb-40467da124a7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.689794 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9vgg\" (UniqueName: \"kubernetes.io/projected/de9918c5-7ca4-4315-babb-40467da124a7-kube-api-access-t9vgg\") on node \"crc\" DevicePath \"\"" Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.689808 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de9918c5-7ca4-4315-babb-40467da124a7-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.912442 4625 generic.go:334] "Generic (PLEG): container finished" podID="de9918c5-7ca4-4315-babb-40467da124a7" containerID="39ef1ae12b8729ac7852cb21d96aede4b5f4cc786e06bdc1158783c20288461d" exitCode=0 Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.912519 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwm9b" event={"ID":"de9918c5-7ca4-4315-babb-40467da124a7","Type":"ContainerDied","Data":"39ef1ae12b8729ac7852cb21d96aede4b5f4cc786e06bdc1158783c20288461d"} Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.912568 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwm9b" event={"ID":"de9918c5-7ca4-4315-babb-40467da124a7","Type":"ContainerDied","Data":"1c29fa77635fc68c3a984b8b345229d28c26f30e7d823b66e344c1267720c927"} Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.912592 4625 scope.go:117] "RemoveContainer" containerID="39ef1ae12b8729ac7852cb21d96aede4b5f4cc786e06bdc1158783c20288461d" Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.912784 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vwm9b" Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.946828 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwm9b"] Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.961086 4625 scope.go:117] "RemoveContainer" containerID="e8072db68c016817e25f77c96dd7076e32411bfdae795b3b2a868201e9cd6e4e" Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.969973 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vwm9b"] Dec 02 14:11:32 crc kubenswrapper[4625]: I1202 14:11:32.998263 4625 scope.go:117] "RemoveContainer" containerID="c26c7f41d3df37280e7246bc311ebcfd2c03742c7e7789fdf11135bc8a8c0e46" Dec 02 14:11:33 crc kubenswrapper[4625]: I1202 14:11:33.038928 4625 scope.go:117] "RemoveContainer" containerID="39ef1ae12b8729ac7852cb21d96aede4b5f4cc786e06bdc1158783c20288461d" Dec 02 14:11:33 crc kubenswrapper[4625]: E1202 14:11:33.039895 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39ef1ae12b8729ac7852cb21d96aede4b5f4cc786e06bdc1158783c20288461d\": container with ID starting with 39ef1ae12b8729ac7852cb21d96aede4b5f4cc786e06bdc1158783c20288461d not found: ID does not exist" containerID="39ef1ae12b8729ac7852cb21d96aede4b5f4cc786e06bdc1158783c20288461d" Dec 02 14:11:33 crc kubenswrapper[4625]: I1202 14:11:33.039970 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39ef1ae12b8729ac7852cb21d96aede4b5f4cc786e06bdc1158783c20288461d"} err="failed to get container status \"39ef1ae12b8729ac7852cb21d96aede4b5f4cc786e06bdc1158783c20288461d\": rpc error: code = NotFound desc = could not find container \"39ef1ae12b8729ac7852cb21d96aede4b5f4cc786e06bdc1158783c20288461d\": container with ID starting with 39ef1ae12b8729ac7852cb21d96aede4b5f4cc786e06bdc1158783c20288461d not found: ID does not exist" Dec 02 14:11:33 crc kubenswrapper[4625]: I1202 14:11:33.040008 4625 scope.go:117] "RemoveContainer" containerID="e8072db68c016817e25f77c96dd7076e32411bfdae795b3b2a868201e9cd6e4e" Dec 02 14:11:33 crc kubenswrapper[4625]: E1202 14:11:33.040381 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8072db68c016817e25f77c96dd7076e32411bfdae795b3b2a868201e9cd6e4e\": container with ID starting with e8072db68c016817e25f77c96dd7076e32411bfdae795b3b2a868201e9cd6e4e not found: ID does not exist" containerID="e8072db68c016817e25f77c96dd7076e32411bfdae795b3b2a868201e9cd6e4e" Dec 02 14:11:33 crc kubenswrapper[4625]: I1202 14:11:33.040405 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8072db68c016817e25f77c96dd7076e32411bfdae795b3b2a868201e9cd6e4e"} err="failed to get container status \"e8072db68c016817e25f77c96dd7076e32411bfdae795b3b2a868201e9cd6e4e\": rpc error: code = NotFound desc = could not find container \"e8072db68c016817e25f77c96dd7076e32411bfdae795b3b2a868201e9cd6e4e\": container with ID starting with e8072db68c016817e25f77c96dd7076e32411bfdae795b3b2a868201e9cd6e4e not found: ID does not exist" Dec 02 14:11:33 crc kubenswrapper[4625]: I1202 14:11:33.040421 4625 scope.go:117] "RemoveContainer" containerID="c26c7f41d3df37280e7246bc311ebcfd2c03742c7e7789fdf11135bc8a8c0e46" Dec 02 14:11:33 crc kubenswrapper[4625]: E1202 14:11:33.040705 4625 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c26c7f41d3df37280e7246bc311ebcfd2c03742c7e7789fdf11135bc8a8c0e46\": container with ID starting with c26c7f41d3df37280e7246bc311ebcfd2c03742c7e7789fdf11135bc8a8c0e46 not found: ID does not exist" containerID="c26c7f41d3df37280e7246bc311ebcfd2c03742c7e7789fdf11135bc8a8c0e46" Dec 02 14:11:33 crc kubenswrapper[4625]: I1202 14:11:33.040730 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c26c7f41d3df37280e7246bc311ebcfd2c03742c7e7789fdf11135bc8a8c0e46"} err="failed to get container status \"c26c7f41d3df37280e7246bc311ebcfd2c03742c7e7789fdf11135bc8a8c0e46\": rpc error: code = NotFound desc = could not find container \"c26c7f41d3df37280e7246bc311ebcfd2c03742c7e7789fdf11135bc8a8c0e46\": container with ID starting with c26c7f41d3df37280e7246bc311ebcfd2c03742c7e7789fdf11135bc8a8c0e46 not found: ID does not exist" Dec 02 14:11:34 crc kubenswrapper[4625]: I1202 14:11:34.126971 4625 scope.go:117] "RemoveContainer" containerID="dde77d1e0c32452a78e4539fea8157b9f734116b752418838b9767f2d7e247bd" Dec 02 14:11:34 crc kubenswrapper[4625]: I1202 14:11:34.869676 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de9918c5-7ca4-4315-babb-40467da124a7" path="/var/lib/kubelet/pods/de9918c5-7ca4-4315-babb-40467da124a7/volumes" Dec 02 14:11:49 crc kubenswrapper[4625]: I1202 14:11:49.272126 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:11:49 crc kubenswrapper[4625]: I1202 14:11:49.273080 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:12:19 crc kubenswrapper[4625]: I1202 14:12:19.270973 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:12:19 crc kubenswrapper[4625]: I1202 14:12:19.271915 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:12:19 crc kubenswrapper[4625]: I1202 14:12:19.271999 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 14:12:19 crc kubenswrapper[4625]: I1202 14:12:19.272757 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:12:19 crc kubenswrapper[4625]: I1202 
14:12:19.272816 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" gracePeriod=600 Dec 02 14:12:19 crc kubenswrapper[4625]: E1202 14:12:19.406648 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:12:19 crc kubenswrapper[4625]: I1202 14:12:19.444902 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" exitCode=0 Dec 02 14:12:19 crc kubenswrapper[4625]: I1202 14:12:19.444986 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03"} Dec 02 14:12:19 crc kubenswrapper[4625]: I1202 14:12:19.445071 4625 scope.go:117] "RemoveContainer" containerID="22eacb360cbd64994ad7dde3fa2964df2620c7bf593d351571346615fdf674ec" Dec 02 14:12:19 crc kubenswrapper[4625]: I1202 14:12:19.446917 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:12:19 crc kubenswrapper[4625]: E1202 14:12:19.447526 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:12:32 crc kubenswrapper[4625]: I1202 14:12:32.856995 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:12:32 crc kubenswrapper[4625]: E1202 14:12:32.858447 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:12:34 crc kubenswrapper[4625]: I1202 14:12:34.226741 4625 scope.go:117] "RemoveContainer" containerID="7155cfa7de55070cff7a7874be6fc821cc88ed501a0c167b3af01a33182482be" Dec 02 14:12:34 crc kubenswrapper[4625]: I1202 14:12:34.258483 4625 scope.go:117] "RemoveContainer" containerID="5e83dc7b7fdf96cd7bac9f6787e4ccf5f911ecfec30c3db5c9ed177836a6e6dc" Dec 02 14:12:34 crc kubenswrapper[4625]: I1202 14:12:34.294604 4625 scope.go:117] "RemoveContainer" containerID="28d747eb014b8634711c10ec401da7a4968d468a1066623e6a3162f685292536" Dec 02 14:12:34 crc kubenswrapper[4625]: I1202 14:12:34.322531 
4625 scope.go:117] "RemoveContainer" containerID="1a3cf40229a149b6f96d45542d9e2d953ac5c335e7d59fd18c322e334129e014" Dec 02 14:12:43 crc kubenswrapper[4625]: I1202 14:12:43.856614 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:12:43 crc kubenswrapper[4625]: E1202 14:12:43.857847 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:12:54 crc kubenswrapper[4625]: I1202 14:12:54.862787 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:12:54 crc kubenswrapper[4625]: E1202 14:12:54.864335 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:13:09 crc kubenswrapper[4625]: I1202 14:13:09.857213 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:13:09 crc kubenswrapper[4625]: E1202 14:13:09.858584 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:13:24 crc kubenswrapper[4625]: I1202 14:13:24.866975 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:13:24 crc kubenswrapper[4625]: E1202 14:13:24.868108 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:13:38 crc kubenswrapper[4625]: I1202 14:13:38.857898 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:13:38 crc kubenswrapper[4625]: E1202 14:13:38.859695 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:13:49 crc kubenswrapper[4625]: I1202 14:13:49.059365 4625 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-b2wb8"] Dec 02 14:13:49 crc kubenswrapper[4625]: I1202 14:13:49.069702 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7f64-account-create-update-v2ng7"] Dec 02 14:13:49 crc kubenswrapper[4625]: I1202 14:13:49.082111 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7f64-account-create-update-v2ng7"] Dec 02 14:13:49 crc kubenswrapper[4625]: I1202 14:13:49.091363 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-b2wb8"] Dec 02 14:13:50 crc kubenswrapper[4625]: I1202 14:13:50.868496 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88666e3c-ac66-4570-a1c1-f4062cae6533" path="/var/lib/kubelet/pods/88666e3c-ac66-4570-a1c1-f4062cae6533/volumes" Dec 02 14:13:50 crc kubenswrapper[4625]: I1202 14:13:50.869509 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7d9fa80-0d4f-4434-aae8-a25fffb891f2" path="/var/lib/kubelet/pods/a7d9fa80-0d4f-4434-aae8-a25fffb891f2/volumes" Dec 02 14:13:53 crc kubenswrapper[4625]: I1202 14:13:53.856657 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:13:53 crc kubenswrapper[4625]: E1202 14:13:53.857909 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:13:57 crc kubenswrapper[4625]: I1202 14:13:57.036119 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-zdssn"] Dec 02 14:13:57 crc kubenswrapper[4625]: I1202 14:13:57.048745 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-zdssn"] Dec 02 14:13:58 crc kubenswrapper[4625]: I1202 14:13:58.037083 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8688-account-create-update-c46p5"] Dec 02 14:13:58 crc kubenswrapper[4625]: I1202 14:13:58.053500 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8688-account-create-update-c46p5"] Dec 02 14:13:58 crc kubenswrapper[4625]: I1202 14:13:58.871558 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bccdd3a9-0e57-475b-a410-d1f6580c1ff5" path="/var/lib/kubelet/pods/bccdd3a9-0e57-475b-a410-d1f6580c1ff5/volumes" Dec 02 14:13:58 crc kubenswrapper[4625]: I1202 14:13:58.872530 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3fd0764-6795-48ad-b740-98ee968ba808" path="/var/lib/kubelet/pods/f3fd0764-6795-48ad-b740-98ee968ba808/volumes" Dec 02 14:13:59 crc kubenswrapper[4625]: I1202 14:13:59.033460 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-zlpv9"] Dec 02 14:13:59 crc kubenswrapper[4625]: I1202 14:13:59.047028 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e870-account-create-update-jgjb6"] Dec 02 14:13:59 crc kubenswrapper[4625]: I1202 14:13:59.058273 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-zlpv9"] Dec 02 14:13:59 crc kubenswrapper[4625]: I1202 14:13:59.068109 4625 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/keystone-e870-account-create-update-jgjb6"] Dec 02 14:14:00 crc kubenswrapper[4625]: I1202 14:14:00.869666 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5258050d-85d9-4593-8d3b-64772e76fcf5" path="/var/lib/kubelet/pods/5258050d-85d9-4593-8d3b-64772e76fcf5/volumes" Dec 02 14:14:00 crc kubenswrapper[4625]: I1202 14:14:00.870817 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c" path="/var/lib/kubelet/pods/b65202fa-3a35-4ea2-8ef7-7a8ed9ac834c/volumes" Dec 02 14:14:06 crc kubenswrapper[4625]: I1202 14:14:06.859030 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:14:06 crc kubenswrapper[4625]: E1202 14:14:06.860176 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:14:17 crc kubenswrapper[4625]: I1202 14:14:17.867104 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:14:17 crc kubenswrapper[4625]: E1202 14:14:17.868165 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.072482 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4jlmz"] Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.086341 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2866-account-create-update-4f925"] Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.101100 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-29vvf"] Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.112172 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ceca-account-create-update-dhx9s"] Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.124369 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-556e-account-create-update-wgkgf"] Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.138261 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-556e-account-create-update-wgkgf"] Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.164183 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2866-account-create-update-4f925"] Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.170531 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-29vvf"] Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.195387 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-vdstw"] Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.211631 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-db-create-4jlmz"] Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.224658 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ceca-account-create-update-dhx9s"] Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.235509 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-vdstw"] Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.871621 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28e8e120-7dce-4848-ad79-d3b0ad33a778" path="/var/lib/kubelet/pods/28e8e120-7dce-4848-ad79-d3b0ad33a778/volumes" Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.874105 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4349bf5c-375b-45aa-b845-70ea55a35bf9" path="/var/lib/kubelet/pods/4349bf5c-375b-45aa-b845-70ea55a35bf9/volumes" Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.875244 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d57ba63-e3b3-40a9-b5a5-88c6654b04fe" path="/var/lib/kubelet/pods/4d57ba63-e3b3-40a9-b5a5-88c6654b04fe/volumes" Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.877675 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be753d78-885f-4972-9174-039b19cf978e" path="/var/lib/kubelet/pods/be753d78-885f-4972-9174-039b19cf978e/volumes" Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.878544 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cab9b0dc-588c-4c86-a1de-25e3c71dce53" path="/var/lib/kubelet/pods/cab9b0dc-588c-4c86-a1de-25e3c71dce53/volumes" Dec 02 14:14:24 crc kubenswrapper[4625]: I1202 14:14:24.879218 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d506dcc8-4877-472a-ad57-f88d656e84f3" path="/var/lib/kubelet/pods/d506dcc8-4877-472a-ad57-f88d656e84f3/volumes" Dec 02 14:14:31 crc kubenswrapper[4625]: I1202 14:14:31.857838 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:14:31 crc kubenswrapper[4625]: E1202 14:14:31.859275 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:14:34 crc kubenswrapper[4625]: I1202 14:14:34.051415 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-flqpl"] Dec 02 14:14:34 crc kubenswrapper[4625]: I1202 14:14:34.062699 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-flqpl"] Dec 02 14:14:34 crc kubenswrapper[4625]: I1202 14:14:34.438179 4625 scope.go:117] "RemoveContainer" containerID="029171181afd062a1b5c55b62e7578673c03215f5a417be7a7c9bd392e1e2031" Dec 02 14:14:34 crc kubenswrapper[4625]: I1202 14:14:34.462337 4625 scope.go:117] "RemoveContainer" containerID="461c387bdc655943790e15d9540cfd547337fd3f11ffc4d78fd9d65ef77a37cf" Dec 02 14:14:34 crc kubenswrapper[4625]: I1202 14:14:34.565087 4625 scope.go:117] "RemoveContainer" containerID="da3a5d21fd9235617d2b65ad44ec840c7571abbbac8f437f39b5399d6af8ee87" Dec 02 14:14:34 crc kubenswrapper[4625]: I1202 14:14:34.622560 4625 scope.go:117] "RemoveContainer" 
containerID="8fbbbc2c13bcd4882f58f5404c9d83abfecbfd7746bc898de7e774fa5448e242" Dec 02 14:14:34 crc kubenswrapper[4625]: I1202 14:14:34.675915 4625 scope.go:117] "RemoveContainer" containerID="4994b4305dd73352b58dbdc7fe58d23779c19a9ef328acb387eb08aa24371813" Dec 02 14:14:34 crc kubenswrapper[4625]: I1202 14:14:34.711512 4625 scope.go:117] "RemoveContainer" containerID="2ae2b3a57540d92f59eca7134bf8d8a5bb0f6d0f32b327e909a9dbab809fc78f" Dec 02 14:14:34 crc kubenswrapper[4625]: I1202 14:14:34.768566 4625 scope.go:117] "RemoveContainer" containerID="51c0c00ed055b8bf0c37ebf84a7952efa79cb3c1cd36c31368cbefc23891dfbd" Dec 02 14:14:34 crc kubenswrapper[4625]: I1202 14:14:34.810914 4625 scope.go:117] "RemoveContainer" containerID="70d3f960f2975ca1143d1a70c4d63404ca8378daf82a55f630ffaba9fb28b594" Dec 02 14:14:34 crc kubenswrapper[4625]: I1202 14:14:34.851577 4625 scope.go:117] "RemoveContainer" containerID="c8e01ffecb4fa4d88e000b1e842482431b7a54db23ff08b9840374a40c2be5fe" Dec 02 14:14:34 crc kubenswrapper[4625]: I1202 14:14:34.867408 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1" path="/var/lib/kubelet/pods/f241e835-4f8c-4ef3-9e74-0e1f3ff87ad1/volumes" Dec 02 14:14:34 crc kubenswrapper[4625]: I1202 14:14:34.883831 4625 scope.go:117] "RemoveContainer" containerID="8d8fedc6ad0e0f9647f59260e7936d338d63f2252c29cb096e1f2bbfc44a523d" Dec 02 14:14:34 crc kubenswrapper[4625]: I1202 14:14:34.908714 4625 scope.go:117] "RemoveContainer" containerID="76b64dbb33f1a96b0917cd51632bfb331aa3ddc10024bcc5b50180bc5c56cc42" Dec 02 14:14:34 crc kubenswrapper[4625]: I1202 14:14:34.932078 4625 scope.go:117] "RemoveContainer" containerID="7d08095349d9631495e96eaa665d331969670996d29344ce0171c008cac42eef" Dec 02 14:14:34 crc kubenswrapper[4625]: I1202 14:14:34.954485 4625 scope.go:117] "RemoveContainer" containerID="1223524f711e9be861c19e77223ac5505f483b6dc7b024e4f119f254a905a4b4" Dec 02 14:14:45 crc kubenswrapper[4625]: I1202 14:14:45.856227 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:14:45 crc kubenswrapper[4625]: E1202 14:14:45.856936 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:14:54 crc kubenswrapper[4625]: I1202 14:14:54.510251 4625 generic.go:334] "Generic (PLEG): container finished" podID="b31c21d6-4087-4521-8566-14b2eeabb679" containerID="b2ff5f1ce0f33ecba4d959d9bd59ae7a92edb2ac3e00e7ebbcce86d6298ed334" exitCode=0 Dec 02 14:14:54 crc kubenswrapper[4625]: I1202 14:14:54.510413 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" event={"ID":"b31c21d6-4087-4521-8566-14b2eeabb679","Type":"ContainerDied","Data":"b2ff5f1ce0f33ecba4d959d9bd59ae7a92edb2ac3e00e7ebbcce86d6298ed334"} Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.025388 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.221768 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-bootstrap-combined-ca-bundle\") pod \"b31c21d6-4087-4521-8566-14b2eeabb679\" (UID: \"b31c21d6-4087-4521-8566-14b2eeabb679\") " Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.222092 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-inventory\") pod \"b31c21d6-4087-4521-8566-14b2eeabb679\" (UID: \"b31c21d6-4087-4521-8566-14b2eeabb679\") " Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.222354 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jntvr\" (UniqueName: \"kubernetes.io/projected/b31c21d6-4087-4521-8566-14b2eeabb679-kube-api-access-jntvr\") pod \"b31c21d6-4087-4521-8566-14b2eeabb679\" (UID: \"b31c21d6-4087-4521-8566-14b2eeabb679\") " Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.222476 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-ssh-key\") pod \"b31c21d6-4087-4521-8566-14b2eeabb679\" (UID: \"b31c21d6-4087-4521-8566-14b2eeabb679\") " Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.228660 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b31c21d6-4087-4521-8566-14b2eeabb679" (UID: "b31c21d6-4087-4521-8566-14b2eeabb679"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.235687 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b31c21d6-4087-4521-8566-14b2eeabb679-kube-api-access-jntvr" (OuterVolumeSpecName: "kube-api-access-jntvr") pod "b31c21d6-4087-4521-8566-14b2eeabb679" (UID: "b31c21d6-4087-4521-8566-14b2eeabb679"). InnerVolumeSpecName "kube-api-access-jntvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.268557 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b31c21d6-4087-4521-8566-14b2eeabb679" (UID: "b31c21d6-4087-4521-8566-14b2eeabb679"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.272821 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-inventory" (OuterVolumeSpecName: "inventory") pod "b31c21d6-4087-4521-8566-14b2eeabb679" (UID: "b31c21d6-4087-4521-8566-14b2eeabb679"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.325595 4625 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.325636 4625 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.325647 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jntvr\" (UniqueName: \"kubernetes.io/projected/b31c21d6-4087-4521-8566-14b2eeabb679-kube-api-access-jntvr\") on node \"crc\" DevicePath \"\"" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.325656 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b31c21d6-4087-4521-8566-14b2eeabb679-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.546764 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" event={"ID":"b31c21d6-4087-4521-8566-14b2eeabb679","Type":"ContainerDied","Data":"194af72dd24becfaa9f69749f19453cc5c6baaf20773ae693b02dbd4fd00db18"} Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.547108 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="194af72dd24becfaa9f69749f19453cc5c6baaf20773ae693b02dbd4fd00db18" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.546838 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.656083 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg"] Dec 02 14:14:56 crc kubenswrapper[4625]: E1202 14:14:56.656983 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9918c5-7ca4-4315-babb-40467da124a7" containerName="extract-utilities" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.657004 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9918c5-7ca4-4315-babb-40467da124a7" containerName="extract-utilities" Dec 02 14:14:56 crc kubenswrapper[4625]: E1202 14:14:56.657018 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9918c5-7ca4-4315-babb-40467da124a7" containerName="registry-server" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.657024 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9918c5-7ca4-4315-babb-40467da124a7" containerName="registry-server" Dec 02 14:14:56 crc kubenswrapper[4625]: E1202 14:14:56.657065 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9918c5-7ca4-4315-babb-40467da124a7" containerName="extract-content" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.657075 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9918c5-7ca4-4315-babb-40467da124a7" containerName="extract-content" Dec 02 14:14:56 crc kubenswrapper[4625]: E1202 14:14:56.657089 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b31c21d6-4087-4521-8566-14b2eeabb679" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.657110 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="b31c21d6-4087-4521-8566-14b2eeabb679" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.657392 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="b31c21d6-4087-4521-8566-14b2eeabb679" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.657443 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="de9918c5-7ca4-4315-babb-40467da124a7" containerName="registry-server" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.660203 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.666264 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.666589 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.666713 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.666889 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.669026 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg"] Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.734142 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg\" (UID: \"1b3affb2-6aa8-445a-81cd-6bdb90c31f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.734213 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6knft\" (UniqueName: \"kubernetes.io/projected/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-kube-api-access-6knft\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg\" (UID: \"1b3affb2-6aa8-445a-81cd-6bdb90c31f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.734409 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg\" (UID: \"1b3affb2-6aa8-445a-81cd-6bdb90c31f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.837041 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg\" (UID: \"1b3affb2-6aa8-445a-81cd-6bdb90c31f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.837129 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6knft\" (UniqueName: \"kubernetes.io/projected/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-kube-api-access-6knft\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg\" (UID: \"1b3affb2-6aa8-445a-81cd-6bdb90c31f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.837161 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg\" (UID: \"1b3affb2-6aa8-445a-81cd-6bdb90c31f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.850485 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg\" (UID: \"1b3affb2-6aa8-445a-81cd-6bdb90c31f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.850495 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg\" (UID: \"1b3affb2-6aa8-445a-81cd-6bdb90c31f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.860594 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6knft\" (UniqueName: \"kubernetes.io/projected/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-kube-api-access-6knft\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg\" (UID: \"1b3affb2-6aa8-445a-81cd-6bdb90c31f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" Dec 02 14:14:56 crc kubenswrapper[4625]: I1202 14:14:56.982107 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" Dec 02 14:14:57 crc kubenswrapper[4625]: I1202 14:14:57.649603 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg"] Dec 02 14:14:57 crc kubenswrapper[4625]: I1202 14:14:57.673547 4625 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:14:58 crc kubenswrapper[4625]: I1202 14:14:58.569228 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" event={"ID":"1b3affb2-6aa8-445a-81cd-6bdb90c31f45","Type":"ContainerStarted","Data":"a473020928ba4460ac6a5016774544116e43c74898211d8cdd72adedc2ea35b0"} Dec 02 14:14:59 crc kubenswrapper[4625]: I1202 14:14:59.581338 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" event={"ID":"1b3affb2-6aa8-445a-81cd-6bdb90c31f45","Type":"ContainerStarted","Data":"98af97b5bfebb5c766229fe6cf4c6c6bc4b6766cef2238d664857efdd176b5f3"} Dec 02 14:14:59 crc kubenswrapper[4625]: I1202 14:14:59.611653 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" podStartSLOduration=2.910065376 podStartE2EDuration="3.611630221s" podCreationTimestamp="2025-12-02 14:14:56 +0000 UTC" firstStartedPulling="2025-12-02 14:14:57.673273214 +0000 UTC m=+1853.635450289" lastFinishedPulling="2025-12-02 14:14:58.374838059 +0000 UTC m=+1854.337015134" observedRunningTime="2025-12-02 14:14:59.609876584 +0000 UTC m=+1855.572053699" watchObservedRunningTime="2025-12-02 14:14:59.611630221 +0000 UTC m=+1855.573807296" Dec 02 14:14:59 crc kubenswrapper[4625]: I1202 14:14:59.856628 4625 scope.go:117] "RemoveContainer" 
containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:14:59 crc kubenswrapper[4625]: E1202 14:14:59.856840 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:15:00 crc kubenswrapper[4625]: I1202 14:15:00.178235 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st"] Dec 02 14:15:00 crc kubenswrapper[4625]: I1202 14:15:00.180181 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" Dec 02 14:15:00 crc kubenswrapper[4625]: I1202 14:15:00.184402 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 14:15:00 crc kubenswrapper[4625]: I1202 14:15:00.195296 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st"] Dec 02 14:15:00 crc kubenswrapper[4625]: I1202 14:15:00.196440 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 14:15:00 crc kubenswrapper[4625]: I1202 14:15:00.364385 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ks9t\" (UniqueName: \"kubernetes.io/projected/158a665a-fd81-4505-94cf-75154b25d97c-kube-api-access-9ks9t\") pod \"collect-profiles-29411415-6v4st\" (UID: \"158a665a-fd81-4505-94cf-75154b25d97c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" Dec 02 14:15:00 crc kubenswrapper[4625]: I1202 14:15:00.364748 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/158a665a-fd81-4505-94cf-75154b25d97c-secret-volume\") pod \"collect-profiles-29411415-6v4st\" (UID: \"158a665a-fd81-4505-94cf-75154b25d97c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" Dec 02 14:15:00 crc kubenswrapper[4625]: I1202 14:15:00.364974 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/158a665a-fd81-4505-94cf-75154b25d97c-config-volume\") pod \"collect-profiles-29411415-6v4st\" (UID: \"158a665a-fd81-4505-94cf-75154b25d97c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" Dec 02 14:15:00 crc kubenswrapper[4625]: I1202 14:15:00.466919 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ks9t\" (UniqueName: \"kubernetes.io/projected/158a665a-fd81-4505-94cf-75154b25d97c-kube-api-access-9ks9t\") pod \"collect-profiles-29411415-6v4st\" (UID: \"158a665a-fd81-4505-94cf-75154b25d97c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" Dec 02 14:15:00 crc kubenswrapper[4625]: I1202 14:15:00.467262 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/158a665a-fd81-4505-94cf-75154b25d97c-secret-volume\") pod \"collect-profiles-29411415-6v4st\" (UID: \"158a665a-fd81-4505-94cf-75154b25d97c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" Dec 02 14:15:00 crc kubenswrapper[4625]: I1202 14:15:00.467453 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/158a665a-fd81-4505-94cf-75154b25d97c-config-volume\") pod \"collect-profiles-29411415-6v4st\" (UID: \"158a665a-fd81-4505-94cf-75154b25d97c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" Dec 02 14:15:00 crc kubenswrapper[4625]: I1202 14:15:00.468821 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/158a665a-fd81-4505-94cf-75154b25d97c-config-volume\") pod \"collect-profiles-29411415-6v4st\" (UID: \"158a665a-fd81-4505-94cf-75154b25d97c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" Dec 02 14:15:00 crc kubenswrapper[4625]: I1202 14:15:00.473237 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/158a665a-fd81-4505-94cf-75154b25d97c-secret-volume\") pod \"collect-profiles-29411415-6v4st\" (UID: \"158a665a-fd81-4505-94cf-75154b25d97c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" Dec 02 14:15:00 crc kubenswrapper[4625]: I1202 14:15:00.489544 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ks9t\" (UniqueName: \"kubernetes.io/projected/158a665a-fd81-4505-94cf-75154b25d97c-kube-api-access-9ks9t\") pod \"collect-profiles-29411415-6v4st\" (UID: \"158a665a-fd81-4505-94cf-75154b25d97c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" Dec 02 14:15:00 crc kubenswrapper[4625]: I1202 14:15:00.506080 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" Dec 02 14:15:01 crc kubenswrapper[4625]: I1202 14:15:01.064896 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st"] Dec 02 14:15:01 crc kubenswrapper[4625]: I1202 14:15:01.602670 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" event={"ID":"158a665a-fd81-4505-94cf-75154b25d97c","Type":"ContainerStarted","Data":"b88c43fe6bb15f55a2c78c390bf6167e9be7c44f0a493bae006b55eb7427ba96"} Dec 02 14:15:01 crc kubenswrapper[4625]: I1202 14:15:01.602952 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" event={"ID":"158a665a-fd81-4505-94cf-75154b25d97c","Type":"ContainerStarted","Data":"6d68faa83d4473f538f46df4c1fe9382cfcd96435bf6d369d5caf303840c92ff"} Dec 02 14:15:01 crc kubenswrapper[4625]: I1202 14:15:01.628425 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" podStartSLOduration=1.6283772490000001 podStartE2EDuration="1.628377249s" podCreationTimestamp="2025-12-02 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:15:01.626794535 +0000 UTC m=+1857.588971610" watchObservedRunningTime="2025-12-02 14:15:01.628377249 +0000 UTC m=+1857.590554324" Dec 02 14:15:02 crc kubenswrapper[4625]: I1202 14:15:02.621438 4625 generic.go:334] "Generic (PLEG): container finished" podID="158a665a-fd81-4505-94cf-75154b25d97c" containerID="b88c43fe6bb15f55a2c78c390bf6167e9be7c44f0a493bae006b55eb7427ba96" exitCode=0 Dec 02 14:15:02 crc kubenswrapper[4625]: I1202 14:15:02.621500 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" event={"ID":"158a665a-fd81-4505-94cf-75154b25d97c","Type":"ContainerDied","Data":"b88c43fe6bb15f55a2c78c390bf6167e9be7c44f0a493bae006b55eb7427ba96"} Dec 02 14:15:04 crc kubenswrapper[4625]: I1202 14:15:04.029424 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" Dec 02 14:15:04 crc kubenswrapper[4625]: I1202 14:15:04.157424 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ks9t\" (UniqueName: \"kubernetes.io/projected/158a665a-fd81-4505-94cf-75154b25d97c-kube-api-access-9ks9t\") pod \"158a665a-fd81-4505-94cf-75154b25d97c\" (UID: \"158a665a-fd81-4505-94cf-75154b25d97c\") " Dec 02 14:15:04 crc kubenswrapper[4625]: I1202 14:15:04.157785 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/158a665a-fd81-4505-94cf-75154b25d97c-config-volume\") pod \"158a665a-fd81-4505-94cf-75154b25d97c\" (UID: \"158a665a-fd81-4505-94cf-75154b25d97c\") " Dec 02 14:15:04 crc kubenswrapper[4625]: I1202 14:15:04.226417 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/158a665a-fd81-4505-94cf-75154b25d97c-config-volume" (OuterVolumeSpecName: "config-volume") pod "158a665a-fd81-4505-94cf-75154b25d97c" (UID: "158a665a-fd81-4505-94cf-75154b25d97c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:15:04 crc kubenswrapper[4625]: I1202 14:15:04.241666 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/158a665a-fd81-4505-94cf-75154b25d97c-kube-api-access-9ks9t" (OuterVolumeSpecName: "kube-api-access-9ks9t") pod "158a665a-fd81-4505-94cf-75154b25d97c" (UID: "158a665a-fd81-4505-94cf-75154b25d97c"). InnerVolumeSpecName "kube-api-access-9ks9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:15:04 crc kubenswrapper[4625]: I1202 14:15:04.253129 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/158a665a-fd81-4505-94cf-75154b25d97c-secret-volume\") pod \"158a665a-fd81-4505-94cf-75154b25d97c\" (UID: \"158a665a-fd81-4505-94cf-75154b25d97c\") " Dec 02 14:15:04 crc kubenswrapper[4625]: I1202 14:15:04.254216 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ks9t\" (UniqueName: \"kubernetes.io/projected/158a665a-fd81-4505-94cf-75154b25d97c-kube-api-access-9ks9t\") on node \"crc\" DevicePath \"\"" Dec 02 14:15:04 crc kubenswrapper[4625]: I1202 14:15:04.254230 4625 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/158a665a-fd81-4505-94cf-75154b25d97c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:15:04 crc kubenswrapper[4625]: I1202 14:15:04.263252 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158a665a-fd81-4505-94cf-75154b25d97c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "158a665a-fd81-4505-94cf-75154b25d97c" (UID: "158a665a-fd81-4505-94cf-75154b25d97c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:15:04 crc kubenswrapper[4625]: I1202 14:15:04.356725 4625 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/158a665a-fd81-4505-94cf-75154b25d97c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:15:04 crc kubenswrapper[4625]: I1202 14:15:04.643696 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" event={"ID":"158a665a-fd81-4505-94cf-75154b25d97c","Type":"ContainerDied","Data":"6d68faa83d4473f538f46df4c1fe9382cfcd96435bf6d369d5caf303840c92ff"} Dec 02 14:15:04 crc kubenswrapper[4625]: I1202 14:15:04.643744 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d68faa83d4473f538f46df4c1fe9382cfcd96435bf6d369d5caf303840c92ff" Dec 02 14:15:04 crc kubenswrapper[4625]: I1202 14:15:04.643765 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st" Dec 02 14:15:11 crc kubenswrapper[4625]: I1202 14:15:11.857371 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:15:11 crc kubenswrapper[4625]: E1202 14:15:11.858496 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:15:24 crc kubenswrapper[4625]: I1202 14:15:24.078233 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-tpfs8"] Dec 02 14:15:24 crc kubenswrapper[4625]: I1202 14:15:24.094924 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-6xqcm"] Dec 02 14:15:24 crc kubenswrapper[4625]: I1202 14:15:24.105823 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-tpfs8"] Dec 02 14:15:24 crc kubenswrapper[4625]: I1202 14:15:24.119585 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-6xqcm"] Dec 02 14:15:24 crc kubenswrapper[4625]: I1202 14:15:24.878737 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:15:24 crc kubenswrapper[4625]: E1202 14:15:24.879418 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:15:24 crc kubenswrapper[4625]: I1202 14:15:24.880174 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="314f653d-9ec6-47e4-af2a-aadc2440d332" path="/var/lib/kubelet/pods/314f653d-9ec6-47e4-af2a-aadc2440d332/volumes" Dec 02 14:15:24 crc kubenswrapper[4625]: I1202 14:15:24.881499 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="506784cb-9737-438b-bd53-4719527b47bf" path="/var/lib/kubelet/pods/506784cb-9737-438b-bd53-4719527b47bf/volumes" Dec 02 14:15:35 crc kubenswrapper[4625]: I1202 14:15:35.230678 4625 scope.go:117] "RemoveContainer" containerID="d0493e0f0a3c8678839326776275f5db0684b9f0dfa81b8729b8bc8fc7e290d8" Dec 02 14:15:35 crc kubenswrapper[4625]: I1202 14:15:35.270393 4625 scope.go:117] "RemoveContainer" containerID="631d79035e0e3fd8d298c70b03579590f9ba71682e1ded470cdf3dc32d86f038" Dec 02 14:15:35 crc kubenswrapper[4625]: I1202 14:15:35.857078 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:15:35 crc kubenswrapper[4625]: E1202 14:15:35.857948 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" 
podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:15:38 crc kubenswrapper[4625]: I1202 14:15:38.046410 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2swhw"] Dec 02 14:15:38 crc kubenswrapper[4625]: I1202 14:15:38.059271 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2swhw"] Dec 02 14:15:38 crc kubenswrapper[4625]: I1202 14:15:38.872896 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7887abf-7df6-4058-b3f0-e58295b168c1" path="/var/lib/kubelet/pods/d7887abf-7df6-4058-b3f0-e58295b168c1/volumes" Dec 02 14:15:40 crc kubenswrapper[4625]: I1202 14:15:40.037896 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-fdkpr"] Dec 02 14:15:40 crc kubenswrapper[4625]: I1202 14:15:40.049890 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-fdkpr"] Dec 02 14:15:40 crc kubenswrapper[4625]: I1202 14:15:40.873872 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2b840d2-7458-4769-9650-e62ff8676008" path="/var/lib/kubelet/pods/c2b840d2-7458-4769-9650-e62ff8676008/volumes" Dec 02 14:15:45 crc kubenswrapper[4625]: I1202 14:15:45.063386 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-w2qlr"] Dec 02 14:15:45 crc kubenswrapper[4625]: I1202 14:15:45.074593 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-w2qlr"] Dec 02 14:15:46 crc kubenswrapper[4625]: I1202 14:15:46.869395 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57d79055-dea2-4cd4-a642-b63d2deaa339" path="/var/lib/kubelet/pods/57d79055-dea2-4cd4-a642-b63d2deaa339/volumes" Dec 02 14:15:49 crc kubenswrapper[4625]: I1202 14:15:49.857044 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:15:49 crc kubenswrapper[4625]: E1202 14:15:49.858090 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:16:04 crc kubenswrapper[4625]: I1202 14:16:04.866546 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:16:04 crc kubenswrapper[4625]: E1202 14:16:04.867819 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:16:09 crc kubenswrapper[4625]: I1202 14:16:09.069903 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-blvbp"] Dec 02 14:16:09 crc kubenswrapper[4625]: I1202 14:16:09.081066 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-blvbp"] Dec 02 14:16:10 crc kubenswrapper[4625]: I1202 14:16:10.875077 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c29ce362-3978-4713-833d-49aab29a394c" path="/var/lib/kubelet/pods/c29ce362-3978-4713-833d-49aab29a394c/volumes" Dec 02 14:16:19 crc kubenswrapper[4625]: I1202 14:16:19.857720 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:16:19 crc kubenswrapper[4625]: E1202 14:16:19.859010 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:16:32 crc kubenswrapper[4625]: I1202 14:16:32.856403 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:16:32 crc kubenswrapper[4625]: E1202 14:16:32.857188 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:16:35 crc kubenswrapper[4625]: I1202 14:16:35.425428 4625 scope.go:117] "RemoveContainer" containerID="da74b31f811c5383a2f75c8c86c15ae7cd8f586c752c735f8fdb5f8d2612694e" Dec 02 14:16:35 crc kubenswrapper[4625]: I1202 14:16:35.476810 4625 scope.go:117] "RemoveContainer" containerID="215f9812fe59267de0032da8a69efef7f944d128864bdbd4cf383ef1b0597e2e" Dec 02 14:16:35 crc kubenswrapper[4625]: I1202 14:16:35.524215 4625 scope.go:117] "RemoveContainer" containerID="09a925c91fc8440516ea11a14fc8c3dfdcb74f05f26fc5cde8087f4c21ccbf41" Dec 02 14:16:35 crc kubenswrapper[4625]: I1202 14:16:35.633808 4625 scope.go:117] "RemoveContainer" containerID="acdbb2c0f1e52e4882ccdc4b4a4b5fbd2a4a4d758a4cf2681f66f20ea8948a52" Dec 02 14:16:35 crc kubenswrapper[4625]: I1202 14:16:35.659440 4625 scope.go:117] "RemoveContainer" containerID="893172c1648c0029902621395892771df41fb07b84730fb9215235b0335e2c67" Dec 02 14:16:35 crc kubenswrapper[4625]: I1202 14:16:35.691703 4625 scope.go:117] "RemoveContainer" containerID="a41ab8cb7c2ffd0c4a687a6224d5ed95612d0f8a21ba32f3a5fedad5376a6a04" Dec 02 14:16:43 crc kubenswrapper[4625]: I1202 14:16:43.856418 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:16:43 crc kubenswrapper[4625]: E1202 14:16:43.857851 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:16:56 crc kubenswrapper[4625]: I1202 14:16:56.857448 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:16:56 crc kubenswrapper[4625]: E1202 14:16:56.858336 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:16:59 crc kubenswrapper[4625]: I1202 14:16:59.029260 4625 generic.go:334] "Generic (PLEG): container finished" podID="1b3affb2-6aa8-445a-81cd-6bdb90c31f45" containerID="98af97b5bfebb5c766229fe6cf4c6c6bc4b6766cef2238d664857efdd176b5f3" exitCode=0 Dec 02 14:16:59 crc kubenswrapper[4625]: I1202 14:16:59.029351 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" event={"ID":"1b3affb2-6aa8-445a-81cd-6bdb90c31f45","Type":"ContainerDied","Data":"98af97b5bfebb5c766229fe6cf4c6c6bc4b6766cef2238d664857efdd176b5f3"} Dec 02 14:16:59 crc kubenswrapper[4625]: I1202 14:16:59.080796 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2308-account-create-update-d8cph"] Dec 02 14:16:59 crc kubenswrapper[4625]: I1202 14:16:59.093816 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-12a0-account-create-update-2jk9s"] Dec 02 14:16:59 crc kubenswrapper[4625]: I1202 14:16:59.101361 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-jgkc9"] Dec 02 14:16:59 crc kubenswrapper[4625]: I1202 14:16:59.112378 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2308-account-create-update-d8cph"] Dec 02 14:16:59 crc kubenswrapper[4625]: I1202 14:16:59.125151 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-12a0-account-create-update-2jk9s"] Dec 02 14:16:59 crc kubenswrapper[4625]: I1202 14:16:59.133079 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-jgkc9"] Dec 02 14:16:59 crc kubenswrapper[4625]: I1202 14:16:59.140063 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6cf2-account-create-update-jgt9w"] Dec 02 14:16:59 crc kubenswrapper[4625]: I1202 14:16:59.146710 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-bkp5f"] Dec 02 14:16:59 crc kubenswrapper[4625]: I1202 14:16:59.153129 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-lf6rg"] Dec 02 14:16:59 crc kubenswrapper[4625]: I1202 14:16:59.159261 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-bkp5f"] Dec 02 14:16:59 crc kubenswrapper[4625]: I1202 14:16:59.167074 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6cf2-account-create-update-jgt9w"] Dec 02 14:16:59 crc kubenswrapper[4625]: I1202 14:16:59.173456 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-lf6rg"] Dec 02 14:17:00 crc kubenswrapper[4625]: I1202 14:17:00.510482 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" Dec 02 14:17:00 crc kubenswrapper[4625]: I1202 14:17:00.665486 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-inventory\") pod \"1b3affb2-6aa8-445a-81cd-6bdb90c31f45\" (UID: \"1b3affb2-6aa8-445a-81cd-6bdb90c31f45\") " Dec 02 14:17:00 crc kubenswrapper[4625]: I1202 14:17:00.665876 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-ssh-key\") pod \"1b3affb2-6aa8-445a-81cd-6bdb90c31f45\" (UID: \"1b3affb2-6aa8-445a-81cd-6bdb90c31f45\") " Dec 02 14:17:00 crc kubenswrapper[4625]: I1202 14:17:00.666001 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6knft\" (UniqueName: \"kubernetes.io/projected/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-kube-api-access-6knft\") pod \"1b3affb2-6aa8-445a-81cd-6bdb90c31f45\" (UID: \"1b3affb2-6aa8-445a-81cd-6bdb90c31f45\") " Dec 02 14:17:00 crc kubenswrapper[4625]: I1202 14:17:00.677741 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-kube-api-access-6knft" (OuterVolumeSpecName: "kube-api-access-6knft") pod "1b3affb2-6aa8-445a-81cd-6bdb90c31f45" (UID: "1b3affb2-6aa8-445a-81cd-6bdb90c31f45"). InnerVolumeSpecName "kube-api-access-6knft". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:17:00 crc kubenswrapper[4625]: I1202 14:17:00.700554 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-inventory" (OuterVolumeSpecName: "inventory") pod "1b3affb2-6aa8-445a-81cd-6bdb90c31f45" (UID: "1b3affb2-6aa8-445a-81cd-6bdb90c31f45"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:17:00 crc kubenswrapper[4625]: I1202 14:17:00.710417 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1b3affb2-6aa8-445a-81cd-6bdb90c31f45" (UID: "1b3affb2-6aa8-445a-81cd-6bdb90c31f45"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:17:00 crc kubenswrapper[4625]: I1202 14:17:00.770272 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:17:00 crc kubenswrapper[4625]: I1202 14:17:00.770345 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6knft\" (UniqueName: \"kubernetes.io/projected/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-kube-api-access-6knft\") on node \"crc\" DevicePath \"\"" Dec 02 14:17:00 crc kubenswrapper[4625]: I1202 14:17:00.770392 4625 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b3affb2-6aa8-445a-81cd-6bdb90c31f45-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 14:17:00 crc kubenswrapper[4625]: I1202 14:17:00.870223 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64b5aba9-30dc-4ef5-a103-c0a1a264abb1" path="/var/lib/kubelet/pods/64b5aba9-30dc-4ef5-a103-c0a1a264abb1/volumes" Dec 02 14:17:00 crc kubenswrapper[4625]: I1202 14:17:00.871289 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83f4615e-8b8d-420d-ad49-9eaf51763d66" path="/var/lib/kubelet/pods/83f4615e-8b8d-420d-ad49-9eaf51763d66/volumes" Dec 02 14:17:00 crc kubenswrapper[4625]: I1202 14:17:00.872349 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a6db25-9cc7-4109-9aac-0b1b13e9082d" path="/var/lib/kubelet/pods/c3a6db25-9cc7-4109-9aac-0b1b13e9082d/volumes" Dec 02 14:17:00 crc kubenswrapper[4625]: I1202 14:17:00.874286 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7acbd1-38a2-4fae-8c70-8d562c580274" path="/var/lib/kubelet/pods/ce7acbd1-38a2-4fae-8c70-8d562c580274/volumes" Dec 02 14:17:00 crc kubenswrapper[4625]: I1202 14:17:00.876281 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3af4557-685c-47a8-b194-2eaa04edad39" path="/var/lib/kubelet/pods/d3af4557-685c-47a8-b194-2eaa04edad39/volumes" Dec 02 14:17:00 crc kubenswrapper[4625]: I1202 14:17:00.877382 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fec66282-38cc-4eb2-ac47-c1a5a7b377f2" path="/var/lib/kubelet/pods/fec66282-38cc-4eb2-ac47-c1a5a7b377f2/volumes" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.053745 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" event={"ID":"1b3affb2-6aa8-445a-81cd-6bdb90c31f45","Type":"ContainerDied","Data":"a473020928ba4460ac6a5016774544116e43c74898211d8cdd72adedc2ea35b0"} Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.053807 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a473020928ba4460ac6a5016774544116e43c74898211d8cdd72adedc2ea35b0" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.053956 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.209999 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw"] Dec 02 14:17:01 crc kubenswrapper[4625]: E1202 14:17:01.210730 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3affb2-6aa8-445a-81cd-6bdb90c31f45" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.210764 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3affb2-6aa8-445a-81cd-6bdb90c31f45" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 02 14:17:01 crc kubenswrapper[4625]: E1202 14:17:01.210814 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158a665a-fd81-4505-94cf-75154b25d97c" containerName="collect-profiles" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.210824 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="158a665a-fd81-4505-94cf-75154b25d97c" containerName="collect-profiles" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.211156 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b3affb2-6aa8-445a-81cd-6bdb90c31f45" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.211202 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="158a665a-fd81-4505-94cf-75154b25d97c" containerName="collect-profiles" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.212251 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.215888 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.216244 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.216422 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.220538 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.230924 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw"] Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.283206 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxxv5\" (UniqueName: \"kubernetes.io/projected/4ab12756-db3d-4271-9017-d059eb68113e-kube-api-access-xxxv5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw\" (UID: \"4ab12756-db3d-4271-9017-d059eb68113e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.283275 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ab12756-db3d-4271-9017-d059eb68113e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw\" (UID: 
\"4ab12756-db3d-4271-9017-d059eb68113e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.283342 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ab12756-db3d-4271-9017-d059eb68113e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw\" (UID: \"4ab12756-db3d-4271-9017-d059eb68113e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.386146 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ab12756-db3d-4271-9017-d059eb68113e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw\" (UID: \"4ab12756-db3d-4271-9017-d059eb68113e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.386342 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxxv5\" (UniqueName: \"kubernetes.io/projected/4ab12756-db3d-4271-9017-d059eb68113e-kube-api-access-xxxv5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw\" (UID: \"4ab12756-db3d-4271-9017-d059eb68113e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.386380 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ab12756-db3d-4271-9017-d059eb68113e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw\" (UID: \"4ab12756-db3d-4271-9017-d059eb68113e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.422727 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ab12756-db3d-4271-9017-d059eb68113e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw\" (UID: \"4ab12756-db3d-4271-9017-d059eb68113e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.433000 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ab12756-db3d-4271-9017-d059eb68113e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw\" (UID: \"4ab12756-db3d-4271-9017-d059eb68113e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.434078 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxxv5\" (UniqueName: \"kubernetes.io/projected/4ab12756-db3d-4271-9017-d059eb68113e-kube-api-access-xxxv5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw\" (UID: \"4ab12756-db3d-4271-9017-d059eb68113e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" Dec 02 14:17:01 crc kubenswrapper[4625]: I1202 14:17:01.532678 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" Dec 02 14:17:02 crc kubenswrapper[4625]: I1202 14:17:02.381905 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw"] Dec 02 14:17:03 crc kubenswrapper[4625]: I1202 14:17:03.080403 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" event={"ID":"4ab12756-db3d-4271-9017-d059eb68113e","Type":"ContainerStarted","Data":"3a670a6e3cddcf16639fd091dcb5399ebe5b5c7932cd2001c2f6a01ab5014859"} Dec 02 14:17:04 crc kubenswrapper[4625]: I1202 14:17:04.092252 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" event={"ID":"4ab12756-db3d-4271-9017-d059eb68113e","Type":"ContainerStarted","Data":"d5ef0db64f47dfccb3a0edb75f79f4cc8e66f8158bdf3ea9dc72df356f111e29"} Dec 02 14:17:04 crc kubenswrapper[4625]: I1202 14:17:04.115344 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" podStartSLOduration=2.357978737 podStartE2EDuration="3.115268511s" podCreationTimestamp="2025-12-02 14:17:01 +0000 UTC" firstStartedPulling="2025-12-02 14:17:02.390555245 +0000 UTC m=+1978.352732320" lastFinishedPulling="2025-12-02 14:17:03.147845009 +0000 UTC m=+1979.110022094" observedRunningTime="2025-12-02 14:17:04.11182656 +0000 UTC m=+1980.074003645" watchObservedRunningTime="2025-12-02 14:17:04.115268511 +0000 UTC m=+1980.077445606" Dec 02 14:17:07 crc kubenswrapper[4625]: I1202 14:17:07.857897 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:17:07 crc kubenswrapper[4625]: E1202 14:17:07.858687 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:17:22 crc kubenswrapper[4625]: I1202 14:17:22.857085 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:17:23 crc kubenswrapper[4625]: I1202 14:17:23.431827 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"d3a735a844bd9bd376fbebb84fff6d0aa76d54ffd47aa4fc8c0675440ff0acf9"} Dec 02 14:17:35 crc kubenswrapper[4625]: I1202 14:17:35.845474 4625 scope.go:117] "RemoveContainer" containerID="69b9cbc4abd6d33cbdbf9c20ce48a060e1bcdb66192afa7b34d2e9a6bf55e83e" Dec 02 14:17:35 crc kubenswrapper[4625]: I1202 14:17:35.883369 4625 scope.go:117] "RemoveContainer" containerID="98bd38813d1817927ebb6af84015f6a4b5b9caa81bb8f6b56728bad74021d524" Dec 02 14:17:35 crc kubenswrapper[4625]: I1202 14:17:35.926837 4625 scope.go:117] "RemoveContainer" containerID="854726dfb2f31ff83e9ad58c6f787945c4c9eed41573830673d5b298b2a7b039" Dec 02 14:17:35 crc kubenswrapper[4625]: I1202 14:17:35.991052 4625 scope.go:117] "RemoveContainer" containerID="4a83870b658fe7ad2faf87d5486746ed4cc4de27850d3f72ea65f26575fbe82f" Dec 
02 14:17:36 crc kubenswrapper[4625]: I1202 14:17:36.051072 4625 scope.go:117] "RemoveContainer" containerID="ce09e6ebbaa76dcacb9dc6d407a3c9edfee8d996dc5e0d51efee099aa69a8c0e" Dec 02 14:17:36 crc kubenswrapper[4625]: I1202 14:17:36.095713 4625 scope.go:117] "RemoveContainer" containerID="abf88742c2064c436a981cfe86478fa9b30e36d7cf552b1324e8f7e4e1bc163d" Dec 02 14:17:39 crc kubenswrapper[4625]: I1202 14:17:39.053819 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-52qdn"] Dec 02 14:17:39 crc kubenswrapper[4625]: I1202 14:17:39.073745 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-52qdn"] Dec 02 14:17:40 crc kubenswrapper[4625]: I1202 14:17:40.869108 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f780395c-9363-4a42-9f25-f7ad97bc51b3" path="/var/lib/kubelet/pods/f780395c-9363-4a42-9f25-f7ad97bc51b3/volumes" Dec 02 14:18:18 crc kubenswrapper[4625]: I1202 14:18:18.082909 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sgm2m"] Dec 02 14:18:18 crc kubenswrapper[4625]: I1202 14:18:18.096975 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-p9qz6"] Dec 02 14:18:18 crc kubenswrapper[4625]: I1202 14:18:18.110603 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sgm2m"] Dec 02 14:18:18 crc kubenswrapper[4625]: I1202 14:18:18.120956 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-p9qz6"] Dec 02 14:18:19 crc kubenswrapper[4625]: I1202 14:18:19.027775 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cbe18c7-0a3f-4a85-91fe-79dd6af095ab" path="/var/lib/kubelet/pods/3cbe18c7-0a3f-4a85-91fe-79dd6af095ab/volumes" Dec 02 14:18:19 crc kubenswrapper[4625]: I1202 14:18:19.028700 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b257027-4621-495c-b675-99d14b598340" path="/var/lib/kubelet/pods/5b257027-4621-495c-b675-99d14b598340/volumes" Dec 02 14:18:26 crc kubenswrapper[4625]: I1202 14:18:26.421181 4625 generic.go:334] "Generic (PLEG): container finished" podID="4ab12756-db3d-4271-9017-d059eb68113e" containerID="d5ef0db64f47dfccb3a0edb75f79f4cc8e66f8158bdf3ea9dc72df356f111e29" exitCode=0 Dec 02 14:18:26 crc kubenswrapper[4625]: I1202 14:18:26.421394 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" event={"ID":"4ab12756-db3d-4271-9017-d059eb68113e","Type":"ContainerDied","Data":"d5ef0db64f47dfccb3a0edb75f79f4cc8e66f8158bdf3ea9dc72df356f111e29"} Dec 02 14:18:26 crc kubenswrapper[4625]: I1202 14:18:26.808732 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z77h2"] Dec 02 14:18:26 crc kubenswrapper[4625]: I1202 14:18:26.811943 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:26 crc kubenswrapper[4625]: I1202 14:18:26.828559 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z77h2"] Dec 02 14:18:26 crc kubenswrapper[4625]: I1202 14:18:26.889721 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caa849c-f574-482f-bf69-320ed7e40cfc-catalog-content\") pod \"community-operators-z77h2\" (UID: \"7caa849c-f574-482f-bf69-320ed7e40cfc\") " pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:26 crc kubenswrapper[4625]: I1202 14:18:26.890205 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caa849c-f574-482f-bf69-320ed7e40cfc-utilities\") pod \"community-operators-z77h2\" (UID: \"7caa849c-f574-482f-bf69-320ed7e40cfc\") " pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:26 crc kubenswrapper[4625]: I1202 14:18:26.890359 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc6ph\" (UniqueName: \"kubernetes.io/projected/7caa849c-f574-482f-bf69-320ed7e40cfc-kube-api-access-nc6ph\") pod \"community-operators-z77h2\" (UID: \"7caa849c-f574-482f-bf69-320ed7e40cfc\") " pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:26 crc kubenswrapper[4625]: I1202 14:18:26.992064 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caa849c-f574-482f-bf69-320ed7e40cfc-catalog-content\") pod \"community-operators-z77h2\" (UID: \"7caa849c-f574-482f-bf69-320ed7e40cfc\") " pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:26 crc kubenswrapper[4625]: I1202 14:18:26.992126 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caa849c-f574-482f-bf69-320ed7e40cfc-utilities\") pod \"community-operators-z77h2\" (UID: \"7caa849c-f574-482f-bf69-320ed7e40cfc\") " pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:26 crc kubenswrapper[4625]: I1202 14:18:26.992218 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc6ph\" (UniqueName: \"kubernetes.io/projected/7caa849c-f574-482f-bf69-320ed7e40cfc-kube-api-access-nc6ph\") pod \"community-operators-z77h2\" (UID: \"7caa849c-f574-482f-bf69-320ed7e40cfc\") " pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:26 crc kubenswrapper[4625]: I1202 14:18:26.993180 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caa849c-f574-482f-bf69-320ed7e40cfc-catalog-content\") pod \"community-operators-z77h2\" (UID: \"7caa849c-f574-482f-bf69-320ed7e40cfc\") " pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:26 crc kubenswrapper[4625]: I1202 14:18:26.993504 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caa849c-f574-482f-bf69-320ed7e40cfc-utilities\") pod \"community-operators-z77h2\" (UID: \"7caa849c-f574-482f-bf69-320ed7e40cfc\") " pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:27 crc kubenswrapper[4625]: I1202 14:18:27.025055 4625 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nc6ph\" (UniqueName: \"kubernetes.io/projected/7caa849c-f574-482f-bf69-320ed7e40cfc-kube-api-access-nc6ph\") pod \"community-operators-z77h2\" (UID: \"7caa849c-f574-482f-bf69-320ed7e40cfc\") " pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:27 crc kubenswrapper[4625]: I1202 14:18:27.135395 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:27 crc kubenswrapper[4625]: I1202 14:18:27.784673 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z77h2"] Dec 02 14:18:27 crc kubenswrapper[4625]: W1202 14:18:27.804802 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7caa849c_f574_482f_bf69_320ed7e40cfc.slice/crio-3a7a437a52cb987a40481e3a38dcf043eea690fc80e047373d04d3c9e3aeff3d WatchSource:0}: Error finding container 3a7a437a52cb987a40481e3a38dcf043eea690fc80e047373d04d3c9e3aeff3d: Status 404 returned error can't find the container with id 3a7a437a52cb987a40481e3a38dcf043eea690fc80e047373d04d3c9e3aeff3d Dec 02 14:18:27 crc kubenswrapper[4625]: I1202 14:18:27.980119 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.038216 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxxv5\" (UniqueName: \"kubernetes.io/projected/4ab12756-db3d-4271-9017-d059eb68113e-kube-api-access-xxxv5\") pod \"4ab12756-db3d-4271-9017-d059eb68113e\" (UID: \"4ab12756-db3d-4271-9017-d059eb68113e\") " Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.038384 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ab12756-db3d-4271-9017-d059eb68113e-ssh-key\") pod \"4ab12756-db3d-4271-9017-d059eb68113e\" (UID: \"4ab12756-db3d-4271-9017-d059eb68113e\") " Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.038445 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ab12756-db3d-4271-9017-d059eb68113e-inventory\") pod \"4ab12756-db3d-4271-9017-d059eb68113e\" (UID: \"4ab12756-db3d-4271-9017-d059eb68113e\") " Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.055204 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab12756-db3d-4271-9017-d059eb68113e-kube-api-access-xxxv5" (OuterVolumeSpecName: "kube-api-access-xxxv5") pod "4ab12756-db3d-4271-9017-d059eb68113e" (UID: "4ab12756-db3d-4271-9017-d059eb68113e"). InnerVolumeSpecName "kube-api-access-xxxv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.084458 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab12756-db3d-4271-9017-d059eb68113e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4ab12756-db3d-4271-9017-d059eb68113e" (UID: "4ab12756-db3d-4271-9017-d059eb68113e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.087675 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab12756-db3d-4271-9017-d059eb68113e-inventory" (OuterVolumeSpecName: "inventory") pod "4ab12756-db3d-4271-9017-d059eb68113e" (UID: "4ab12756-db3d-4271-9017-d059eb68113e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.141607 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxxv5\" (UniqueName: \"kubernetes.io/projected/4ab12756-db3d-4271-9017-d059eb68113e-kube-api-access-xxxv5\") on node \"crc\" DevicePath \"\"" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.141903 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ab12756-db3d-4271-9017-d059eb68113e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.141962 4625 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ab12756-db3d-4271-9017-d059eb68113e-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.455367 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" event={"ID":"4ab12756-db3d-4271-9017-d059eb68113e","Type":"ContainerDied","Data":"3a670a6e3cddcf16639fd091dcb5399ebe5b5c7932cd2001c2f6a01ab5014859"} Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.455933 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a670a6e3cddcf16639fd091dcb5399ebe5b5c7932cd2001c2f6a01ab5014859" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.455467 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.458674 4625 generic.go:334] "Generic (PLEG): container finished" podID="7caa849c-f574-482f-bf69-320ed7e40cfc" containerID="70c2dced159957aae61e2da87cdcb9a6b19ca987ce69e87cf352c8c58ed78c44" exitCode=0 Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.458857 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z77h2" event={"ID":"7caa849c-f574-482f-bf69-320ed7e40cfc","Type":"ContainerDied","Data":"70c2dced159957aae61e2da87cdcb9a6b19ca987ce69e87cf352c8c58ed78c44"} Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.460484 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z77h2" event={"ID":"7caa849c-f574-482f-bf69-320ed7e40cfc","Type":"ContainerStarted","Data":"3a7a437a52cb987a40481e3a38dcf043eea690fc80e047373d04d3c9e3aeff3d"} Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.607535 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm"] Dec 02 14:18:28 crc kubenswrapper[4625]: E1202 14:18:28.608945 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab12756-db3d-4271-9017-d059eb68113e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.609089 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab12756-db3d-4271-9017-d059eb68113e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.609485 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab12756-db3d-4271-9017-d059eb68113e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.611887 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.616624 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm"] Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.621115 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.621819 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.622024 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.622155 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.659877 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4b2123a9-3349-49ed-a533-b0550d7babc0-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm\" (UID: \"4b2123a9-3349-49ed-a533-b0550d7babc0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.678738 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpkjw\" (UniqueName: \"kubernetes.io/projected/4b2123a9-3349-49ed-a533-b0550d7babc0-kube-api-access-tpkjw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm\" (UID: \"4b2123a9-3349-49ed-a533-b0550d7babc0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.678881 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b2123a9-3349-49ed-a533-b0550d7babc0-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm\" (UID: \"4b2123a9-3349-49ed-a533-b0550d7babc0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.781733 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpkjw\" (UniqueName: \"kubernetes.io/projected/4b2123a9-3349-49ed-a533-b0550d7babc0-kube-api-access-tpkjw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm\" (UID: \"4b2123a9-3349-49ed-a533-b0550d7babc0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.781842 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b2123a9-3349-49ed-a533-b0550d7babc0-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm\" (UID: \"4b2123a9-3349-49ed-a533-b0550d7babc0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.781963 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4b2123a9-3349-49ed-a533-b0550d7babc0-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm\" (UID: \"4b2123a9-3349-49ed-a533-b0550d7babc0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.788470 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4b2123a9-3349-49ed-a533-b0550d7babc0-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm\" (UID: \"4b2123a9-3349-49ed-a533-b0550d7babc0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.789337 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b2123a9-3349-49ed-a533-b0550d7babc0-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm\" (UID: \"4b2123a9-3349-49ed-a533-b0550d7babc0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.807238 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpkjw\" (UniqueName: \"kubernetes.io/projected/4b2123a9-3349-49ed-a533-b0550d7babc0-kube-api-access-tpkjw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm\" (UID: \"4b2123a9-3349-49ed-a533-b0550d7babc0\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" Dec 02 14:18:28 crc kubenswrapper[4625]: I1202 14:18:28.940149 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" Dec 02 14:18:29 crc kubenswrapper[4625]: I1202 14:18:29.472562 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z77h2" event={"ID":"7caa849c-f574-482f-bf69-320ed7e40cfc","Type":"ContainerStarted","Data":"c91f593a66c870bb5ca1e3b2c228a925f035706c0591b1cf8115346aca13e92d"} Dec 02 14:18:29 crc kubenswrapper[4625]: I1202 14:18:29.588997 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm"] Dec 02 14:18:29 crc kubenswrapper[4625]: I1202 14:18:29.807607 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l4b59"] Dec 02 14:18:29 crc kubenswrapper[4625]: I1202 14:18:29.810548 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:29 crc kubenswrapper[4625]: I1202 14:18:29.821416 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4b59"] Dec 02 14:18:29 crc kubenswrapper[4625]: I1202 14:18:29.911199 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-catalog-content\") pod \"redhat-operators-l4b59\" (UID: \"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd\") " pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:29 crc kubenswrapper[4625]: I1202 14:18:29.911244 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-utilities\") pod \"redhat-operators-l4b59\" (UID: \"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd\") " pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:29 crc kubenswrapper[4625]: I1202 14:18:29.911450 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt8n8\" (UniqueName: \"kubernetes.io/projected/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-kube-api-access-wt8n8\") pod \"redhat-operators-l4b59\" (UID: \"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd\") " pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:30 crc kubenswrapper[4625]: I1202 14:18:30.013872 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt8n8\" (UniqueName: \"kubernetes.io/projected/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-kube-api-access-wt8n8\") pod \"redhat-operators-l4b59\" (UID: \"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd\") " pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:30 crc kubenswrapper[4625]: I1202 14:18:30.014002 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-catalog-content\") pod \"redhat-operators-l4b59\" (UID: \"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd\") " pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:30 crc kubenswrapper[4625]: I1202 14:18:30.014027 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-utilities\") pod \"redhat-operators-l4b59\" (UID: \"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd\") " pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:30 crc kubenswrapper[4625]: I1202 14:18:30.014541 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-utilities\") pod \"redhat-operators-l4b59\" (UID: \"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd\") " pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:30 crc kubenswrapper[4625]: I1202 14:18:30.015226 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-catalog-content\") pod \"redhat-operators-l4b59\" (UID: \"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd\") " pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:30 crc kubenswrapper[4625]: I1202 14:18:30.042006 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wt8n8\" (UniqueName: \"kubernetes.io/projected/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-kube-api-access-wt8n8\") pod \"redhat-operators-l4b59\" (UID: \"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd\") " pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:30 crc kubenswrapper[4625]: I1202 14:18:30.176187 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:30 crc kubenswrapper[4625]: I1202 14:18:30.484134 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" event={"ID":"4b2123a9-3349-49ed-a533-b0550d7babc0","Type":"ContainerStarted","Data":"13e6ee4fe4d0b66d326bb4116eee57abfbc965baef45f4060e09e539d513f11e"} Dec 02 14:18:32 crc kubenswrapper[4625]: I1202 14:18:32.779684 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="f8bda2bc-c054-4188-ad43-47b49dab4949" containerName="ovn-northd" probeResult="failure" output="command timed out" Dec 02 14:18:32 crc kubenswrapper[4625]: I1202 14:18:32.787767 4625 generic.go:334] "Generic (PLEG): container finished" podID="7caa849c-f574-482f-bf69-320ed7e40cfc" containerID="c91f593a66c870bb5ca1e3b2c228a925f035706c0591b1cf8115346aca13e92d" exitCode=0 Dec 02 14:18:32 crc kubenswrapper[4625]: I1202 14:18:32.787799 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z77h2" event={"ID":"7caa849c-f574-482f-bf69-320ed7e40cfc","Type":"ContainerDied","Data":"c91f593a66c870bb5ca1e3b2c228a925f035706c0591b1cf8115346aca13e92d"} Dec 02 14:18:32 crc kubenswrapper[4625]: I1202 14:18:32.792883 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="f8bda2bc-c054-4188-ad43-47b49dab4949" containerName="ovn-northd" probeResult="failure" output="command timed out" Dec 02 14:18:32 crc kubenswrapper[4625]: I1202 14:18:32.816639 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4b59"] Dec 02 14:18:33 crc kubenswrapper[4625]: I1202 14:18:33.808572 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z77h2" event={"ID":"7caa849c-f574-482f-bf69-320ed7e40cfc","Type":"ContainerStarted","Data":"c2cf6a93dbada792fcd2a390cfe30165e38e17f5944cde21d548d20e954ff6b7"} Dec 02 14:18:33 crc kubenswrapper[4625]: I1202 14:18:33.812223 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" event={"ID":"4b2123a9-3349-49ed-a533-b0550d7babc0","Type":"ContainerStarted","Data":"4dd41ca1540734f665a7f65d56fe0df88dd7ff9cc42cc5cf3ea04f6d7c85b3fa"} Dec 02 14:18:33 crc kubenswrapper[4625]: I1202 14:18:33.814271 4625 generic.go:334] "Generic (PLEG): container finished" podID="a4815cb8-3b3a-4c05-ba62-223f7f03d1cd" containerID="6944d5b60790ed2d0efbf2bdc66bd748d2d9337863841f81d8b4fdd1b7982c8e" exitCode=0 Dec 02 14:18:33 crc kubenswrapper[4625]: I1202 14:18:33.814349 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4b59" event={"ID":"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd","Type":"ContainerDied","Data":"6944d5b60790ed2d0efbf2bdc66bd748d2d9337863841f81d8b4fdd1b7982c8e"} Dec 02 14:18:33 crc kubenswrapper[4625]: I1202 14:18:33.814370 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4b59" 
event={"ID":"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd","Type":"ContainerStarted","Data":"2f7a163bee8056d5d2ad9c4b103b274997e70c2a39c5f8a091a5d9f8c187eee3"} Dec 02 14:18:33 crc kubenswrapper[4625]: I1202 14:18:33.857422 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z77h2" podStartSLOduration=3.023795641 podStartE2EDuration="7.857384676s" podCreationTimestamp="2025-12-02 14:18:26 +0000 UTC" firstStartedPulling="2025-12-02 14:18:28.461189383 +0000 UTC m=+2064.423366458" lastFinishedPulling="2025-12-02 14:18:33.294778428 +0000 UTC m=+2069.256955493" observedRunningTime="2025-12-02 14:18:33.837233219 +0000 UTC m=+2069.799410294" watchObservedRunningTime="2025-12-02 14:18:33.857384676 +0000 UTC m=+2069.819561761" Dec 02 14:18:33 crc kubenswrapper[4625]: I1202 14:18:33.878131 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" podStartSLOduration=4.398911953 podStartE2EDuration="5.878099678s" podCreationTimestamp="2025-12-02 14:18:28 +0000 UTC" firstStartedPulling="2025-12-02 14:18:29.592214313 +0000 UTC m=+2065.554391388" lastFinishedPulling="2025-12-02 14:18:31.071402038 +0000 UTC m=+2067.033579113" observedRunningTime="2025-12-02 14:18:33.868868421 +0000 UTC m=+2069.831045506" watchObservedRunningTime="2025-12-02 14:18:33.878099678 +0000 UTC m=+2069.840276753" Dec 02 14:18:35 crc kubenswrapper[4625]: I1202 14:18:35.841141 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4b59" event={"ID":"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd","Type":"ContainerStarted","Data":"129492209d039b5584dd95166fdc0a4d8a0a47c71f8359875df10ab29d3edbeb"} Dec 02 14:18:36 crc kubenswrapper[4625]: I1202 14:18:36.245743 4625 scope.go:117] "RemoveContainer" containerID="019ed058cf76d2fa5a7eb64b019e033a7a07c402b0949cfe4a52742aa248fd30" Dec 02 14:18:36 crc kubenswrapper[4625]: I1202 14:18:36.278750 4625 scope.go:117] "RemoveContainer" containerID="b179ab67ebb82e13e7582b87e6336ef7e1c0a271dbe138fd902e3eabb9740a9f" Dec 02 14:18:36 crc kubenswrapper[4625]: I1202 14:18:36.349335 4625 scope.go:117] "RemoveContainer" containerID="3a156d4ab635c2ca274d781e865910d9221c487f02a47308f3ac022ff72e8181" Dec 02 14:18:37 crc kubenswrapper[4625]: I1202 14:18:37.136423 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:37 crc kubenswrapper[4625]: I1202 14:18:37.137045 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:38 crc kubenswrapper[4625]: I1202 14:18:38.254164 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z77h2" podUID="7caa849c-f574-482f-bf69-320ed7e40cfc" containerName="registry-server" probeResult="failure" output=< Dec 02 14:18:38 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s Dec 02 14:18:38 crc kubenswrapper[4625]: > Dec 02 14:18:40 crc kubenswrapper[4625]: I1202 14:18:40.899876 4625 generic.go:334] "Generic (PLEG): container finished" podID="4b2123a9-3349-49ed-a533-b0550d7babc0" containerID="4dd41ca1540734f665a7f65d56fe0df88dd7ff9cc42cc5cf3ea04f6d7c85b3fa" exitCode=0 Dec 02 14:18:40 crc kubenswrapper[4625]: I1202 14:18:40.899958 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" 
event={"ID":"4b2123a9-3349-49ed-a533-b0550d7babc0","Type":"ContainerDied","Data":"4dd41ca1540734f665a7f65d56fe0df88dd7ff9cc42cc5cf3ea04f6d7c85b3fa"} Dec 02 14:18:40 crc kubenswrapper[4625]: I1202 14:18:40.902793 4625 generic.go:334] "Generic (PLEG): container finished" podID="a4815cb8-3b3a-4c05-ba62-223f7f03d1cd" containerID="129492209d039b5584dd95166fdc0a4d8a0a47c71f8359875df10ab29d3edbeb" exitCode=0 Dec 02 14:18:40 crc kubenswrapper[4625]: I1202 14:18:40.902843 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4b59" event={"ID":"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd","Type":"ContainerDied","Data":"129492209d039b5584dd95166fdc0a4d8a0a47c71f8359875df10ab29d3edbeb"} Dec 02 14:18:42 crc kubenswrapper[4625]: I1202 14:18:42.597466 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" Dec 02 14:18:42 crc kubenswrapper[4625]: I1202 14:18:42.611733 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b2123a9-3349-49ed-a533-b0550d7babc0-inventory\") pod \"4b2123a9-3349-49ed-a533-b0550d7babc0\" (UID: \"4b2123a9-3349-49ed-a533-b0550d7babc0\") " Dec 02 14:18:42 crc kubenswrapper[4625]: I1202 14:18:42.611816 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpkjw\" (UniqueName: \"kubernetes.io/projected/4b2123a9-3349-49ed-a533-b0550d7babc0-kube-api-access-tpkjw\") pod \"4b2123a9-3349-49ed-a533-b0550d7babc0\" (UID: \"4b2123a9-3349-49ed-a533-b0550d7babc0\") " Dec 02 14:18:42 crc kubenswrapper[4625]: I1202 14:18:42.612106 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4b2123a9-3349-49ed-a533-b0550d7babc0-ssh-key\") pod \"4b2123a9-3349-49ed-a533-b0550d7babc0\" (UID: \"4b2123a9-3349-49ed-a533-b0550d7babc0\") " Dec 02 14:18:42 crc kubenswrapper[4625]: I1202 14:18:42.630583 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b2123a9-3349-49ed-a533-b0550d7babc0-kube-api-access-tpkjw" (OuterVolumeSpecName: "kube-api-access-tpkjw") pod "4b2123a9-3349-49ed-a533-b0550d7babc0" (UID: "4b2123a9-3349-49ed-a533-b0550d7babc0"). InnerVolumeSpecName "kube-api-access-tpkjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:18:42 crc kubenswrapper[4625]: I1202 14:18:42.660185 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b2123a9-3349-49ed-a533-b0550d7babc0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4b2123a9-3349-49ed-a533-b0550d7babc0" (UID: "4b2123a9-3349-49ed-a533-b0550d7babc0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:18:42 crc kubenswrapper[4625]: I1202 14:18:42.686852 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b2123a9-3349-49ed-a533-b0550d7babc0-inventory" (OuterVolumeSpecName: "inventory") pod "4b2123a9-3349-49ed-a533-b0550d7babc0" (UID: "4b2123a9-3349-49ed-a533-b0550d7babc0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:18:42 crc kubenswrapper[4625]: I1202 14:18:42.717471 4625 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b2123a9-3349-49ed-a533-b0550d7babc0-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 14:18:42 crc kubenswrapper[4625]: I1202 14:18:42.717503 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpkjw\" (UniqueName: \"kubernetes.io/projected/4b2123a9-3349-49ed-a533-b0550d7babc0-kube-api-access-tpkjw\") on node \"crc\" DevicePath \"\"" Dec 02 14:18:42 crc kubenswrapper[4625]: I1202 14:18:42.717538 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4b2123a9-3349-49ed-a533-b0550d7babc0-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:18:42 crc kubenswrapper[4625]: I1202 14:18:42.932365 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" event={"ID":"4b2123a9-3349-49ed-a533-b0550d7babc0","Type":"ContainerDied","Data":"13e6ee4fe4d0b66d326bb4116eee57abfbc965baef45f4060e09e539d513f11e"} Dec 02 14:18:42 crc kubenswrapper[4625]: I1202 14:18:42.932424 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13e6ee4fe4d0b66d326bb4116eee57abfbc965baef45f4060e09e539d513f11e" Dec 02 14:18:42 crc kubenswrapper[4625]: I1202 14:18:42.932454 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm" Dec 02 14:18:42 crc kubenswrapper[4625]: I1202 14:18:42.945886 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4b59" event={"ID":"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd","Type":"ContainerStarted","Data":"f5d19f10263b61e3f5b8f4a31fd7bcd57a0b1e97e1b19ddd946deef26eaac49d"} Dec 02 14:18:42 crc kubenswrapper[4625]: I1202 14:18:42.980436 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l4b59" podStartSLOduration=5.753625201 podStartE2EDuration="13.980411681s" podCreationTimestamp="2025-12-02 14:18:29 +0000 UTC" firstStartedPulling="2025-12-02 14:18:33.815768347 +0000 UTC m=+2069.777945422" lastFinishedPulling="2025-12-02 14:18:42.042554817 +0000 UTC m=+2078.004731902" observedRunningTime="2025-12-02 14:18:42.975002717 +0000 UTC m=+2078.937179792" watchObservedRunningTime="2025-12-02 14:18:42.980411681 +0000 UTC m=+2078.942588756" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.033935 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p"] Dec 02 14:18:43 crc kubenswrapper[4625]: E1202 14:18:43.034659 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b2123a9-3349-49ed-a533-b0550d7babc0" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.034690 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b2123a9-3349-49ed-a533-b0550d7babc0" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.035026 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b2123a9-3349-49ed-a533-b0550d7babc0" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.036078 4625 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.041677 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.043480 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.043701 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.043903 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.044966 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p"] Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.126720 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b962e31-3a28-4083-af18-2c1b6f53b3b3-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ttw4p\" (UID: \"6b962e31-3a28-4083-af18-2c1b6f53b3b3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.127117 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn8cd\" (UniqueName: \"kubernetes.io/projected/6b962e31-3a28-4083-af18-2c1b6f53b3b3-kube-api-access-sn8cd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ttw4p\" (UID: \"6b962e31-3a28-4083-af18-2c1b6f53b3b3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.127157 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b962e31-3a28-4083-af18-2c1b6f53b3b3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ttw4p\" (UID: \"6b962e31-3a28-4083-af18-2c1b6f53b3b3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.229980 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b962e31-3a28-4083-af18-2c1b6f53b3b3-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ttw4p\" (UID: \"6b962e31-3a28-4083-af18-2c1b6f53b3b3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.230070 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn8cd\" (UniqueName: \"kubernetes.io/projected/6b962e31-3a28-4083-af18-2c1b6f53b3b3-kube-api-access-sn8cd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ttw4p\" (UID: \"6b962e31-3a28-4083-af18-2c1b6f53b3b3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.230129 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b962e31-3a28-4083-af18-2c1b6f53b3b3-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-ttw4p\" (UID: \"6b962e31-3a28-4083-af18-2c1b6f53b3b3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.238014 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b962e31-3a28-4083-af18-2c1b6f53b3b3-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ttw4p\" (UID: \"6b962e31-3a28-4083-af18-2c1b6f53b3b3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.247243 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b962e31-3a28-4083-af18-2c1b6f53b3b3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ttw4p\" (UID: \"6b962e31-3a28-4083-af18-2c1b6f53b3b3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.261011 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn8cd\" (UniqueName: \"kubernetes.io/projected/6b962e31-3a28-4083-af18-2c1b6f53b3b3-kube-api-access-sn8cd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ttw4p\" (UID: \"6b962e31-3a28-4083-af18-2c1b6f53b3b3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" Dec 02 14:18:43 crc kubenswrapper[4625]: I1202 14:18:43.390485 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" Dec 02 14:18:43 crc kubenswrapper[4625]: W1202 14:18:43.996972 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b962e31_3a28_4083_af18_2c1b6f53b3b3.slice/crio-40a4d3cc9843051e6e478c748dccf462874815b4b826be874d14adbffa761daa WatchSource:0}: Error finding container 40a4d3cc9843051e6e478c748dccf462874815b4b826be874d14adbffa761daa: Status 404 returned error can't find the container with id 40a4d3cc9843051e6e478c748dccf462874815b4b826be874d14adbffa761daa Dec 02 14:18:44 crc kubenswrapper[4625]: I1202 14:18:44.000447 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p"] Dec 02 14:18:44 crc kubenswrapper[4625]: I1202 14:18:44.969262 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" event={"ID":"6b962e31-3a28-4083-af18-2c1b6f53b3b3","Type":"ContainerStarted","Data":"40a4d3cc9843051e6e478c748dccf462874815b4b826be874d14adbffa761daa"} Dec 02 14:18:46 crc kubenswrapper[4625]: I1202 14:18:46.990050 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" event={"ID":"6b962e31-3a28-4083-af18-2c1b6f53b3b3","Type":"ContainerStarted","Data":"d9513e15331eb3c5a878410df1514f6fcc4a941e46f62b83fd477a8aab730074"} Dec 02 14:18:47 crc kubenswrapper[4625]: I1202 14:18:47.200553 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:47 crc kubenswrapper[4625]: I1202 14:18:47.234879 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" podStartSLOduration=2.373002951 podStartE2EDuration="4.234854999s" 
podCreationTimestamp="2025-12-02 14:18:43 +0000 UTC" firstStartedPulling="2025-12-02 14:18:44.000592409 +0000 UTC m=+2079.962769514" lastFinishedPulling="2025-12-02 14:18:45.862444487 +0000 UTC m=+2081.824621562" observedRunningTime="2025-12-02 14:18:47.012085633 +0000 UTC m=+2082.974262718" watchObservedRunningTime="2025-12-02 14:18:47.234854999 +0000 UTC m=+2083.197032074" Dec 02 14:18:47 crc kubenswrapper[4625]: I1202 14:18:47.262299 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:47 crc kubenswrapper[4625]: I1202 14:18:47.446402 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z77h2"] Dec 02 14:18:49 crc kubenswrapper[4625]: I1202 14:18:49.018082 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z77h2" podUID="7caa849c-f574-482f-bf69-320ed7e40cfc" containerName="registry-server" containerID="cri-o://c2cf6a93dbada792fcd2a390cfe30165e38e17f5944cde21d548d20e954ff6b7" gracePeriod=2 Dec 02 14:18:49 crc kubenswrapper[4625]: I1202 14:18:49.517474 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:49 crc kubenswrapper[4625]: I1202 14:18:49.703149 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caa849c-f574-482f-bf69-320ed7e40cfc-catalog-content\") pod \"7caa849c-f574-482f-bf69-320ed7e40cfc\" (UID: \"7caa849c-f574-482f-bf69-320ed7e40cfc\") " Dec 02 14:18:49 crc kubenswrapper[4625]: I1202 14:18:49.703254 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caa849c-f574-482f-bf69-320ed7e40cfc-utilities\") pod \"7caa849c-f574-482f-bf69-320ed7e40cfc\" (UID: \"7caa849c-f574-482f-bf69-320ed7e40cfc\") " Dec 02 14:18:49 crc kubenswrapper[4625]: I1202 14:18:49.703347 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc6ph\" (UniqueName: \"kubernetes.io/projected/7caa849c-f574-482f-bf69-320ed7e40cfc-kube-api-access-nc6ph\") pod \"7caa849c-f574-482f-bf69-320ed7e40cfc\" (UID: \"7caa849c-f574-482f-bf69-320ed7e40cfc\") " Dec 02 14:18:49 crc kubenswrapper[4625]: I1202 14:18:49.704253 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7caa849c-f574-482f-bf69-320ed7e40cfc-utilities" (OuterVolumeSpecName: "utilities") pod "7caa849c-f574-482f-bf69-320ed7e40cfc" (UID: "7caa849c-f574-482f-bf69-320ed7e40cfc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:18:49 crc kubenswrapper[4625]: I1202 14:18:49.725651 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7caa849c-f574-482f-bf69-320ed7e40cfc-kube-api-access-nc6ph" (OuterVolumeSpecName: "kube-api-access-nc6ph") pod "7caa849c-f574-482f-bf69-320ed7e40cfc" (UID: "7caa849c-f574-482f-bf69-320ed7e40cfc"). InnerVolumeSpecName "kube-api-access-nc6ph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:18:49 crc kubenswrapper[4625]: I1202 14:18:49.778155 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7caa849c-f574-482f-bf69-320ed7e40cfc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7caa849c-f574-482f-bf69-320ed7e40cfc" (UID: "7caa849c-f574-482f-bf69-320ed7e40cfc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:18:49 crc kubenswrapper[4625]: I1202 14:18:49.806689 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caa849c-f574-482f-bf69-320ed7e40cfc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:18:49 crc kubenswrapper[4625]: I1202 14:18:49.807033 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caa849c-f574-482f-bf69-320ed7e40cfc-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:18:49 crc kubenswrapper[4625]: I1202 14:18:49.807133 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc6ph\" (UniqueName: \"kubernetes.io/projected/7caa849c-f574-482f-bf69-320ed7e40cfc-kube-api-access-nc6ph\") on node \"crc\" DevicePath \"\"" Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.050620 4625 generic.go:334] "Generic (PLEG): container finished" podID="7caa849c-f574-482f-bf69-320ed7e40cfc" containerID="c2cf6a93dbada792fcd2a390cfe30165e38e17f5944cde21d548d20e954ff6b7" exitCode=0 Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.050695 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z77h2" event={"ID":"7caa849c-f574-482f-bf69-320ed7e40cfc","Type":"ContainerDied","Data":"c2cf6a93dbada792fcd2a390cfe30165e38e17f5944cde21d548d20e954ff6b7"} Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.050736 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z77h2" event={"ID":"7caa849c-f574-482f-bf69-320ed7e40cfc","Type":"ContainerDied","Data":"3a7a437a52cb987a40481e3a38dcf043eea690fc80e047373d04d3c9e3aeff3d"} Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.050759 4625 scope.go:117] "RemoveContainer" containerID="c2cf6a93dbada792fcd2a390cfe30165e38e17f5944cde21d548d20e954ff6b7" Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.050984 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z77h2" Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.089229 4625 scope.go:117] "RemoveContainer" containerID="c91f593a66c870bb5ca1e3b2c228a925f035706c0591b1cf8115346aca13e92d" Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.105250 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z77h2"] Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.128681 4625 scope.go:117] "RemoveContainer" containerID="70c2dced159957aae61e2da87cdcb9a6b19ca987ce69e87cf352c8c58ed78c44" Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.130802 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z77h2"] Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.176539 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.176592 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.178473 4625 scope.go:117] "RemoveContainer" containerID="c2cf6a93dbada792fcd2a390cfe30165e38e17f5944cde21d548d20e954ff6b7" Dec 02 14:18:50 crc kubenswrapper[4625]: E1202 14:18:50.179618 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2cf6a93dbada792fcd2a390cfe30165e38e17f5944cde21d548d20e954ff6b7\": container with ID starting with c2cf6a93dbada792fcd2a390cfe30165e38e17f5944cde21d548d20e954ff6b7 not found: ID does not exist" containerID="c2cf6a93dbada792fcd2a390cfe30165e38e17f5944cde21d548d20e954ff6b7" Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.179657 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cf6a93dbada792fcd2a390cfe30165e38e17f5944cde21d548d20e954ff6b7"} err="failed to get container status \"c2cf6a93dbada792fcd2a390cfe30165e38e17f5944cde21d548d20e954ff6b7\": rpc error: code = NotFound desc = could not find container \"c2cf6a93dbada792fcd2a390cfe30165e38e17f5944cde21d548d20e954ff6b7\": container with ID starting with c2cf6a93dbada792fcd2a390cfe30165e38e17f5944cde21d548d20e954ff6b7 not found: ID does not exist" Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.179687 4625 scope.go:117] "RemoveContainer" containerID="c91f593a66c870bb5ca1e3b2c228a925f035706c0591b1cf8115346aca13e92d" Dec 02 14:18:50 crc kubenswrapper[4625]: E1202 14:18:50.180109 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c91f593a66c870bb5ca1e3b2c228a925f035706c0591b1cf8115346aca13e92d\": container with ID starting with c91f593a66c870bb5ca1e3b2c228a925f035706c0591b1cf8115346aca13e92d not found: ID does not exist" containerID="c91f593a66c870bb5ca1e3b2c228a925f035706c0591b1cf8115346aca13e92d" Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.180171 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c91f593a66c870bb5ca1e3b2c228a925f035706c0591b1cf8115346aca13e92d"} err="failed to get container status \"c91f593a66c870bb5ca1e3b2c228a925f035706c0591b1cf8115346aca13e92d\": rpc error: code = NotFound desc = could not find container \"c91f593a66c870bb5ca1e3b2c228a925f035706c0591b1cf8115346aca13e92d\": container with ID starting with 
c91f593a66c870bb5ca1e3b2c228a925f035706c0591b1cf8115346aca13e92d not found: ID does not exist" Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.180209 4625 scope.go:117] "RemoveContainer" containerID="70c2dced159957aae61e2da87cdcb9a6b19ca987ce69e87cf352c8c58ed78c44" Dec 02 14:18:50 crc kubenswrapper[4625]: E1202 14:18:50.180986 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70c2dced159957aae61e2da87cdcb9a6b19ca987ce69e87cf352c8c58ed78c44\": container with ID starting with 70c2dced159957aae61e2da87cdcb9a6b19ca987ce69e87cf352c8c58ed78c44 not found: ID does not exist" containerID="70c2dced159957aae61e2da87cdcb9a6b19ca987ce69e87cf352c8c58ed78c44" Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.181172 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70c2dced159957aae61e2da87cdcb9a6b19ca987ce69e87cf352c8c58ed78c44"} err="failed to get container status \"70c2dced159957aae61e2da87cdcb9a6b19ca987ce69e87cf352c8c58ed78c44\": rpc error: code = NotFound desc = could not find container \"70c2dced159957aae61e2da87cdcb9a6b19ca987ce69e87cf352c8c58ed78c44\": container with ID starting with 70c2dced159957aae61e2da87cdcb9a6b19ca987ce69e87cf352c8c58ed78c44 not found: ID does not exist" Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.255247 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:50 crc kubenswrapper[4625]: I1202 14:18:50.871692 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7caa849c-f574-482f-bf69-320ed7e40cfc" path="/var/lib/kubelet/pods/7caa849c-f574-482f-bf69-320ed7e40cfc/volumes" Dec 02 14:18:51 crc kubenswrapper[4625]: I1202 14:18:51.117418 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:52 crc kubenswrapper[4625]: I1202 14:18:52.644845 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4b59"] Dec 02 14:18:53 crc kubenswrapper[4625]: I1202 14:18:53.083089 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l4b59" podUID="a4815cb8-3b3a-4c05-ba62-223f7f03d1cd" containerName="registry-server" containerID="cri-o://f5d19f10263b61e3f5b8f4a31fd7bcd57a0b1e97e1b19ddd946deef26eaac49d" gracePeriod=2 Dec 02 14:18:53 crc kubenswrapper[4625]: I1202 14:18:53.716269 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:53 crc kubenswrapper[4625]: I1202 14:18:53.809244 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt8n8\" (UniqueName: \"kubernetes.io/projected/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-kube-api-access-wt8n8\") pod \"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd\" (UID: \"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd\") " Dec 02 14:18:53 crc kubenswrapper[4625]: I1202 14:18:53.809369 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-catalog-content\") pod \"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd\" (UID: \"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd\") " Dec 02 14:18:53 crc kubenswrapper[4625]: I1202 14:18:53.809449 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-utilities\") pod \"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd\" (UID: \"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd\") " Dec 02 14:18:53 crc kubenswrapper[4625]: I1202 14:18:53.810584 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-utilities" (OuterVolumeSpecName: "utilities") pod "a4815cb8-3b3a-4c05-ba62-223f7f03d1cd" (UID: "a4815cb8-3b3a-4c05-ba62-223f7f03d1cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:18:53 crc kubenswrapper[4625]: I1202 14:18:53.818744 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-kube-api-access-wt8n8" (OuterVolumeSpecName: "kube-api-access-wt8n8") pod "a4815cb8-3b3a-4c05-ba62-223f7f03d1cd" (UID: "a4815cb8-3b3a-4c05-ba62-223f7f03d1cd"). InnerVolumeSpecName "kube-api-access-wt8n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:18:53 crc kubenswrapper[4625]: I1202 14:18:53.913014 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:18:53 crc kubenswrapper[4625]: I1202 14:18:53.913065 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt8n8\" (UniqueName: \"kubernetes.io/projected/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-kube-api-access-wt8n8\") on node \"crc\" DevicePath \"\"" Dec 02 14:18:53 crc kubenswrapper[4625]: I1202 14:18:53.950955 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4815cb8-3b3a-4c05-ba62-223f7f03d1cd" (UID: "a4815cb8-3b3a-4c05-ba62-223f7f03d1cd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:18:54 crc kubenswrapper[4625]: I1202 14:18:54.015654 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:18:54 crc kubenswrapper[4625]: I1202 14:18:54.093690 4625 generic.go:334] "Generic (PLEG): container finished" podID="a4815cb8-3b3a-4c05-ba62-223f7f03d1cd" containerID="f5d19f10263b61e3f5b8f4a31fd7bcd57a0b1e97e1b19ddd946deef26eaac49d" exitCode=0 Dec 02 14:18:54 crc kubenswrapper[4625]: I1202 14:18:54.093751 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4b59" event={"ID":"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd","Type":"ContainerDied","Data":"f5d19f10263b61e3f5b8f4a31fd7bcd57a0b1e97e1b19ddd946deef26eaac49d"} Dec 02 14:18:54 crc kubenswrapper[4625]: I1202 14:18:54.093793 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4b59" event={"ID":"a4815cb8-3b3a-4c05-ba62-223f7f03d1cd","Type":"ContainerDied","Data":"2f7a163bee8056d5d2ad9c4b103b274997e70c2a39c5f8a091a5d9f8c187eee3"} Dec 02 14:18:54 crc kubenswrapper[4625]: I1202 14:18:54.093881 4625 scope.go:117] "RemoveContainer" containerID="f5d19f10263b61e3f5b8f4a31fd7bcd57a0b1e97e1b19ddd946deef26eaac49d" Dec 02 14:18:54 crc kubenswrapper[4625]: I1202 14:18:54.093901 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4b59" Dec 02 14:18:54 crc kubenswrapper[4625]: I1202 14:18:54.121385 4625 scope.go:117] "RemoveContainer" containerID="129492209d039b5584dd95166fdc0a4d8a0a47c71f8359875df10ab29d3edbeb" Dec 02 14:18:54 crc kubenswrapper[4625]: I1202 14:18:54.140652 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4b59"] Dec 02 14:18:54 crc kubenswrapper[4625]: I1202 14:18:54.150779 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l4b59"] Dec 02 14:18:54 crc kubenswrapper[4625]: I1202 14:18:54.174272 4625 scope.go:117] "RemoveContainer" containerID="6944d5b60790ed2d0efbf2bdc66bd748d2d9337863841f81d8b4fdd1b7982c8e" Dec 02 14:18:54 crc kubenswrapper[4625]: I1202 14:18:54.206252 4625 scope.go:117] "RemoveContainer" containerID="f5d19f10263b61e3f5b8f4a31fd7bcd57a0b1e97e1b19ddd946deef26eaac49d" Dec 02 14:18:54 crc kubenswrapper[4625]: E1202 14:18:54.206986 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5d19f10263b61e3f5b8f4a31fd7bcd57a0b1e97e1b19ddd946deef26eaac49d\": container with ID starting with f5d19f10263b61e3f5b8f4a31fd7bcd57a0b1e97e1b19ddd946deef26eaac49d not found: ID does not exist" containerID="f5d19f10263b61e3f5b8f4a31fd7bcd57a0b1e97e1b19ddd946deef26eaac49d" Dec 02 14:18:54 crc kubenswrapper[4625]: I1202 14:18:54.207064 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5d19f10263b61e3f5b8f4a31fd7bcd57a0b1e97e1b19ddd946deef26eaac49d"} err="failed to get container status \"f5d19f10263b61e3f5b8f4a31fd7bcd57a0b1e97e1b19ddd946deef26eaac49d\": rpc error: code = NotFound desc = could not find container \"f5d19f10263b61e3f5b8f4a31fd7bcd57a0b1e97e1b19ddd946deef26eaac49d\": container with ID starting with f5d19f10263b61e3f5b8f4a31fd7bcd57a0b1e97e1b19ddd946deef26eaac49d not found: ID does not exist" Dec 02 14:18:54 crc 
kubenswrapper[4625]: I1202 14:18:54.207192 4625 scope.go:117] "RemoveContainer" containerID="129492209d039b5584dd95166fdc0a4d8a0a47c71f8359875df10ab29d3edbeb" Dec 02 14:18:54 crc kubenswrapper[4625]: E1202 14:18:54.207999 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"129492209d039b5584dd95166fdc0a4d8a0a47c71f8359875df10ab29d3edbeb\": container with ID starting with 129492209d039b5584dd95166fdc0a4d8a0a47c71f8359875df10ab29d3edbeb not found: ID does not exist" containerID="129492209d039b5584dd95166fdc0a4d8a0a47c71f8359875df10ab29d3edbeb" Dec 02 14:18:54 crc kubenswrapper[4625]: I1202 14:18:54.208034 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"129492209d039b5584dd95166fdc0a4d8a0a47c71f8359875df10ab29d3edbeb"} err="failed to get container status \"129492209d039b5584dd95166fdc0a4d8a0a47c71f8359875df10ab29d3edbeb\": rpc error: code = NotFound desc = could not find container \"129492209d039b5584dd95166fdc0a4d8a0a47c71f8359875df10ab29d3edbeb\": container with ID starting with 129492209d039b5584dd95166fdc0a4d8a0a47c71f8359875df10ab29d3edbeb not found: ID does not exist" Dec 02 14:18:54 crc kubenswrapper[4625]: I1202 14:18:54.208076 4625 scope.go:117] "RemoveContainer" containerID="6944d5b60790ed2d0efbf2bdc66bd748d2d9337863841f81d8b4fdd1b7982c8e" Dec 02 14:18:54 crc kubenswrapper[4625]: E1202 14:18:54.208546 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6944d5b60790ed2d0efbf2bdc66bd748d2d9337863841f81d8b4fdd1b7982c8e\": container with ID starting with 6944d5b60790ed2d0efbf2bdc66bd748d2d9337863841f81d8b4fdd1b7982c8e not found: ID does not exist" containerID="6944d5b60790ed2d0efbf2bdc66bd748d2d9337863841f81d8b4fdd1b7982c8e" Dec 02 14:18:54 crc kubenswrapper[4625]: I1202 14:18:54.208577 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6944d5b60790ed2d0efbf2bdc66bd748d2d9337863841f81d8b4fdd1b7982c8e"} err="failed to get container status \"6944d5b60790ed2d0efbf2bdc66bd748d2d9337863841f81d8b4fdd1b7982c8e\": rpc error: code = NotFound desc = could not find container \"6944d5b60790ed2d0efbf2bdc66bd748d2d9337863841f81d8b4fdd1b7982c8e\": container with ID starting with 6944d5b60790ed2d0efbf2bdc66bd748d2d9337863841f81d8b4fdd1b7982c8e not found: ID does not exist" Dec 02 14:18:54 crc kubenswrapper[4625]: I1202 14:18:54.893040 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4815cb8-3b3a-4c05-ba62-223f7f03d1cd" path="/var/lib/kubelet/pods/a4815cb8-3b3a-4c05-ba62-223f7f03d1cd/volumes" Dec 02 14:19:00 crc kubenswrapper[4625]: I1202 14:19:00.150000 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-s9zxp"] Dec 02 14:19:00 crc kubenswrapper[4625]: I1202 14:19:00.159513 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-s9zxp"] Dec 02 14:19:00 crc kubenswrapper[4625]: I1202 14:19:00.868044 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bc01764-a48e-4ca9-b717-f79df542b76e" path="/var/lib/kubelet/pods/6bc01764-a48e-4ca9-b717-f79df542b76e/volumes" Dec 02 14:19:36 crc kubenswrapper[4625]: I1202 14:19:36.735334 4625 scope.go:117] "RemoveContainer" containerID="083279c60aad88e82f0d585d1e3599eb65265da2e14cba5840fda19e99a18e9f" Dec 02 14:19:37 crc kubenswrapper[4625]: I1202 14:19:37.766464 4625 generic.go:334] 
"Generic (PLEG): container finished" podID="6b962e31-3a28-4083-af18-2c1b6f53b3b3" containerID="d9513e15331eb3c5a878410df1514f6fcc4a941e46f62b83fd477a8aab730074" exitCode=0 Dec 02 14:19:37 crc kubenswrapper[4625]: I1202 14:19:37.766563 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" event={"ID":"6b962e31-3a28-4083-af18-2c1b6f53b3b3","Type":"ContainerDied","Data":"d9513e15331eb3c5a878410df1514f6fcc4a941e46f62b83fd477a8aab730074"} Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.272993 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.286334 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b962e31-3a28-4083-af18-2c1b6f53b3b3-inventory\") pod \"6b962e31-3a28-4083-af18-2c1b6f53b3b3\" (UID: \"6b962e31-3a28-4083-af18-2c1b6f53b3b3\") " Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.286539 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn8cd\" (UniqueName: \"kubernetes.io/projected/6b962e31-3a28-4083-af18-2c1b6f53b3b3-kube-api-access-sn8cd\") pod \"6b962e31-3a28-4083-af18-2c1b6f53b3b3\" (UID: \"6b962e31-3a28-4083-af18-2c1b6f53b3b3\") " Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.286907 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b962e31-3a28-4083-af18-2c1b6f53b3b3-ssh-key\") pod \"6b962e31-3a28-4083-af18-2c1b6f53b3b3\" (UID: \"6b962e31-3a28-4083-af18-2c1b6f53b3b3\") " Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.346261 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b962e31-3a28-4083-af18-2c1b6f53b3b3-kube-api-access-sn8cd" (OuterVolumeSpecName: "kube-api-access-sn8cd") pod "6b962e31-3a28-4083-af18-2c1b6f53b3b3" (UID: "6b962e31-3a28-4083-af18-2c1b6f53b3b3"). InnerVolumeSpecName "kube-api-access-sn8cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.387854 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b962e31-3a28-4083-af18-2c1b6f53b3b3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6b962e31-3a28-4083-af18-2c1b6f53b3b3" (UID: "6b962e31-3a28-4083-af18-2c1b6f53b3b3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.390000 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b962e31-3a28-4083-af18-2c1b6f53b3b3-inventory" (OuterVolumeSpecName: "inventory") pod "6b962e31-3a28-4083-af18-2c1b6f53b3b3" (UID: "6b962e31-3a28-4083-af18-2c1b6f53b3b3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.390199 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn8cd\" (UniqueName: \"kubernetes.io/projected/6b962e31-3a28-4083-af18-2c1b6f53b3b3-kube-api-access-sn8cd\") on node \"crc\" DevicePath \"\"" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.390224 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b962e31-3a28-4083-af18-2c1b6f53b3b3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.390237 4625 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b962e31-3a28-4083-af18-2c1b6f53b3b3-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.794468 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" event={"ID":"6b962e31-3a28-4083-af18-2c1b6f53b3b3","Type":"ContainerDied","Data":"40a4d3cc9843051e6e478c748dccf462874815b4b826be874d14adbffa761daa"} Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.794528 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40a4d3cc9843051e6e478c748dccf462874815b4b826be874d14adbffa761daa" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.794654 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ttw4p" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.943800 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm"] Dec 02 14:19:39 crc kubenswrapper[4625]: E1202 14:19:39.944521 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b962e31-3a28-4083-af18-2c1b6f53b3b3" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.944545 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b962e31-3a28-4083-af18-2c1b6f53b3b3" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 02 14:19:39 crc kubenswrapper[4625]: E1202 14:19:39.944566 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4815cb8-3b3a-4c05-ba62-223f7f03d1cd" containerName="registry-server" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.944573 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4815cb8-3b3a-4c05-ba62-223f7f03d1cd" containerName="registry-server" Dec 02 14:19:39 crc kubenswrapper[4625]: E1202 14:19:39.944591 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7caa849c-f574-482f-bf69-320ed7e40cfc" containerName="extract-content" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.944597 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="7caa849c-f574-482f-bf69-320ed7e40cfc" containerName="extract-content" Dec 02 14:19:39 crc kubenswrapper[4625]: E1202 14:19:39.944603 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4815cb8-3b3a-4c05-ba62-223f7f03d1cd" containerName="extract-content" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.944609 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4815cb8-3b3a-4c05-ba62-223f7f03d1cd" containerName="extract-content" Dec 02 14:19:39 crc kubenswrapper[4625]: E1202 14:19:39.944632 4625 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="a4815cb8-3b3a-4c05-ba62-223f7f03d1cd" containerName="extract-utilities" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.944639 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4815cb8-3b3a-4c05-ba62-223f7f03d1cd" containerName="extract-utilities" Dec 02 14:19:39 crc kubenswrapper[4625]: E1202 14:19:39.944646 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7caa849c-f574-482f-bf69-320ed7e40cfc" containerName="registry-server" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.944652 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="7caa849c-f574-482f-bf69-320ed7e40cfc" containerName="registry-server" Dec 02 14:19:39 crc kubenswrapper[4625]: E1202 14:19:39.944676 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7caa849c-f574-482f-bf69-320ed7e40cfc" containerName="extract-utilities" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.944683 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="7caa849c-f574-482f-bf69-320ed7e40cfc" containerName="extract-utilities" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.944921 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b962e31-3a28-4083-af18-2c1b6f53b3b3" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.944946 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4815cb8-3b3a-4c05-ba62-223f7f03d1cd" containerName="registry-server" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.944962 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="7caa849c-f574-482f-bf69-320ed7e40cfc" containerName="registry-server" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.945891 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.952732 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.952942 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.953889 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.954399 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 14:19:39 crc kubenswrapper[4625]: I1202 14:19:39.970490 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm"] Dec 02 14:19:40 crc kubenswrapper[4625]: I1202 14:19:40.006405 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acb5fca1-1ef0-4678-8323-5a42790a0998-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm\" (UID: \"acb5fca1-1ef0-4678-8323-5a42790a0998\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" Dec 02 14:19:40 crc kubenswrapper[4625]: I1202 14:19:40.006474 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pchc\" (UniqueName: \"kubernetes.io/projected/acb5fca1-1ef0-4678-8323-5a42790a0998-kube-api-access-2pchc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm\" (UID: \"acb5fca1-1ef0-4678-8323-5a42790a0998\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" Dec 02 14:19:40 crc kubenswrapper[4625]: I1202 14:19:40.006516 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acb5fca1-1ef0-4678-8323-5a42790a0998-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm\" (UID: \"acb5fca1-1ef0-4678-8323-5a42790a0998\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" Dec 02 14:19:40 crc kubenswrapper[4625]: I1202 14:19:40.108801 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acb5fca1-1ef0-4678-8323-5a42790a0998-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm\" (UID: \"acb5fca1-1ef0-4678-8323-5a42790a0998\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" Dec 02 14:19:40 crc kubenswrapper[4625]: I1202 14:19:40.108894 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pchc\" (UniqueName: \"kubernetes.io/projected/acb5fca1-1ef0-4678-8323-5a42790a0998-kube-api-access-2pchc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm\" (UID: \"acb5fca1-1ef0-4678-8323-5a42790a0998\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" Dec 02 14:19:40 crc kubenswrapper[4625]: I1202 14:19:40.108946 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acb5fca1-1ef0-4678-8323-5a42790a0998-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm\" 
(UID: \"acb5fca1-1ef0-4678-8323-5a42790a0998\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" Dec 02 14:19:40 crc kubenswrapper[4625]: I1202 14:19:40.114157 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acb5fca1-1ef0-4678-8323-5a42790a0998-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm\" (UID: \"acb5fca1-1ef0-4678-8323-5a42790a0998\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" Dec 02 14:19:40 crc kubenswrapper[4625]: I1202 14:19:40.117223 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acb5fca1-1ef0-4678-8323-5a42790a0998-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm\" (UID: \"acb5fca1-1ef0-4678-8323-5a42790a0998\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" Dec 02 14:19:40 crc kubenswrapper[4625]: I1202 14:19:40.129826 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pchc\" (UniqueName: \"kubernetes.io/projected/acb5fca1-1ef0-4678-8323-5a42790a0998-kube-api-access-2pchc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm\" (UID: \"acb5fca1-1ef0-4678-8323-5a42790a0998\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" Dec 02 14:19:40 crc kubenswrapper[4625]: I1202 14:19:40.275634 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" Dec 02 14:19:40 crc kubenswrapper[4625]: I1202 14:19:40.952393 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm"] Dec 02 14:19:41 crc kubenswrapper[4625]: I1202 14:19:41.815210 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" event={"ID":"acb5fca1-1ef0-4678-8323-5a42790a0998","Type":"ContainerStarted","Data":"151a558ce1fecddfbc618c19abe1d77530efb8a01c14af30cb6875db597afe85"} Dec 02 14:19:41 crc kubenswrapper[4625]: I1202 14:19:41.815665 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" event={"ID":"acb5fca1-1ef0-4678-8323-5a42790a0998","Type":"ContainerStarted","Data":"42146edea5fa31afbe7172abb2d890a92d782cc659180c269b7cd3ffa34a99e7"} Dec 02 14:19:49 crc kubenswrapper[4625]: I1202 14:19:49.271137 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:19:49 crc kubenswrapper[4625]: I1202 14:19:49.271847 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:20:19 crc kubenswrapper[4625]: I1202 14:20:19.271744 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Dec 02 14:20:19 crc kubenswrapper[4625]: I1202 14:20:19.272296 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:20:40 crc kubenswrapper[4625]: I1202 14:20:40.629326 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" podStartSLOduration=61.066084332 podStartE2EDuration="1m1.629281145s" podCreationTimestamp="2025-12-02 14:19:39 +0000 UTC" firstStartedPulling="2025-12-02 14:19:40.966988547 +0000 UTC m=+2136.929165622" lastFinishedPulling="2025-12-02 14:19:41.53018535 +0000 UTC m=+2137.492362435" observedRunningTime="2025-12-02 14:19:41.853217296 +0000 UTC m=+2137.815394371" watchObservedRunningTime="2025-12-02 14:20:40.629281145 +0000 UTC m=+2196.591458220" Dec 02 14:20:40 crc kubenswrapper[4625]: I1202 14:20:40.631539 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-grrwb"] Dec 02 14:20:40 crc kubenswrapper[4625]: I1202 14:20:40.635148 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:40 crc kubenswrapper[4625]: I1202 14:20:40.645636 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-grrwb"] Dec 02 14:20:40 crc kubenswrapper[4625]: I1202 14:20:40.741405 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9eb76be6-0575-4d41-a830-587195ec0bf6-catalog-content\") pod \"redhat-marketplace-grrwb\" (UID: \"9eb76be6-0575-4d41-a830-587195ec0bf6\") " pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:40 crc kubenswrapper[4625]: I1202 14:20:40.741519 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7q9j\" (UniqueName: \"kubernetes.io/projected/9eb76be6-0575-4d41-a830-587195ec0bf6-kube-api-access-j7q9j\") pod \"redhat-marketplace-grrwb\" (UID: \"9eb76be6-0575-4d41-a830-587195ec0bf6\") " pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:40 crc kubenswrapper[4625]: I1202 14:20:40.741563 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9eb76be6-0575-4d41-a830-587195ec0bf6-utilities\") pod \"redhat-marketplace-grrwb\" (UID: \"9eb76be6-0575-4d41-a830-587195ec0bf6\") " pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:40 crc kubenswrapper[4625]: I1202 14:20:40.844276 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7q9j\" (UniqueName: \"kubernetes.io/projected/9eb76be6-0575-4d41-a830-587195ec0bf6-kube-api-access-j7q9j\") pod \"redhat-marketplace-grrwb\" (UID: \"9eb76be6-0575-4d41-a830-587195ec0bf6\") " pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:40 crc kubenswrapper[4625]: I1202 14:20:40.844370 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9eb76be6-0575-4d41-a830-587195ec0bf6-utilities\") pod \"redhat-marketplace-grrwb\" (UID: 
\"9eb76be6-0575-4d41-a830-587195ec0bf6\") " pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:40 crc kubenswrapper[4625]: I1202 14:20:40.844515 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9eb76be6-0575-4d41-a830-587195ec0bf6-catalog-content\") pod \"redhat-marketplace-grrwb\" (UID: \"9eb76be6-0575-4d41-a830-587195ec0bf6\") " pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:40 crc kubenswrapper[4625]: I1202 14:20:40.845060 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9eb76be6-0575-4d41-a830-587195ec0bf6-catalog-content\") pod \"redhat-marketplace-grrwb\" (UID: \"9eb76be6-0575-4d41-a830-587195ec0bf6\") " pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:40 crc kubenswrapper[4625]: I1202 14:20:40.845110 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9eb76be6-0575-4d41-a830-587195ec0bf6-utilities\") pod \"redhat-marketplace-grrwb\" (UID: \"9eb76be6-0575-4d41-a830-587195ec0bf6\") " pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:40 crc kubenswrapper[4625]: I1202 14:20:40.874585 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7q9j\" (UniqueName: \"kubernetes.io/projected/9eb76be6-0575-4d41-a830-587195ec0bf6-kube-api-access-j7q9j\") pod \"redhat-marketplace-grrwb\" (UID: \"9eb76be6-0575-4d41-a830-587195ec0bf6\") " pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:40 crc kubenswrapper[4625]: I1202 14:20:40.963333 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:41 crc kubenswrapper[4625]: I1202 14:20:41.590696 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-grrwb"] Dec 02 14:20:42 crc kubenswrapper[4625]: I1202 14:20:42.464802 4625 generic.go:334] "Generic (PLEG): container finished" podID="9eb76be6-0575-4d41-a830-587195ec0bf6" containerID="2167b640fd961744ddb7daa8fd85ce7467b1dad7930bf4e801caff46daf1cd65" exitCode=0 Dec 02 14:20:42 crc kubenswrapper[4625]: I1202 14:20:42.464960 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grrwb" event={"ID":"9eb76be6-0575-4d41-a830-587195ec0bf6","Type":"ContainerDied","Data":"2167b640fd961744ddb7daa8fd85ce7467b1dad7930bf4e801caff46daf1cd65"} Dec 02 14:20:42 crc kubenswrapper[4625]: I1202 14:20:42.465203 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grrwb" event={"ID":"9eb76be6-0575-4d41-a830-587195ec0bf6","Type":"ContainerStarted","Data":"e17cd629e8da541e100a0cbe2ac43feb73cb2bcb1d08a81b0e13f662fe6f2055"} Dec 02 14:20:42 crc kubenswrapper[4625]: I1202 14:20:42.469395 4625 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:20:44 crc kubenswrapper[4625]: I1202 14:20:44.486225 4625 generic.go:334] "Generic (PLEG): container finished" podID="9eb76be6-0575-4d41-a830-587195ec0bf6" containerID="aeb52d03ebe3daae739c907468fe39f4eba50dd2843cbabc9e26ba85bfabf06c" exitCode=0 Dec 02 14:20:44 crc kubenswrapper[4625]: I1202 14:20:44.486348 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grrwb" 
event={"ID":"9eb76be6-0575-4d41-a830-587195ec0bf6","Type":"ContainerDied","Data":"aeb52d03ebe3daae739c907468fe39f4eba50dd2843cbabc9e26ba85bfabf06c"} Dec 02 14:20:45 crc kubenswrapper[4625]: I1202 14:20:45.499878 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grrwb" event={"ID":"9eb76be6-0575-4d41-a830-587195ec0bf6","Type":"ContainerStarted","Data":"5e9c86672af5db61f49d2fa36c6b5ffc7b061dca2a1f8e6e6cdd506de11089a9"} Dec 02 14:20:45 crc kubenswrapper[4625]: I1202 14:20:45.527895 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-grrwb" podStartSLOduration=2.804732089 podStartE2EDuration="5.527868713s" podCreationTimestamp="2025-12-02 14:20:40 +0000 UTC" firstStartedPulling="2025-12-02 14:20:42.468962914 +0000 UTC m=+2198.431139989" lastFinishedPulling="2025-12-02 14:20:45.192099528 +0000 UTC m=+2201.154276613" observedRunningTime="2025-12-02 14:20:45.522460569 +0000 UTC m=+2201.484637664" watchObservedRunningTime="2025-12-02 14:20:45.527868713 +0000 UTC m=+2201.490045788" Dec 02 14:20:46 crc kubenswrapper[4625]: I1202 14:20:46.514197 4625 generic.go:334] "Generic (PLEG): container finished" podID="acb5fca1-1ef0-4678-8323-5a42790a0998" containerID="151a558ce1fecddfbc618c19abe1d77530efb8a01c14af30cb6875db597afe85" exitCode=0 Dec 02 14:20:46 crc kubenswrapper[4625]: I1202 14:20:46.514404 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" event={"ID":"acb5fca1-1ef0-4678-8323-5a42790a0998","Type":"ContainerDied","Data":"151a558ce1fecddfbc618c19abe1d77530efb8a01c14af30cb6875db597afe85"} Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.146057 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.242042 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acb5fca1-1ef0-4678-8323-5a42790a0998-inventory\") pod \"acb5fca1-1ef0-4678-8323-5a42790a0998\" (UID: \"acb5fca1-1ef0-4678-8323-5a42790a0998\") " Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.242187 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acb5fca1-1ef0-4678-8323-5a42790a0998-ssh-key\") pod \"acb5fca1-1ef0-4678-8323-5a42790a0998\" (UID: \"acb5fca1-1ef0-4678-8323-5a42790a0998\") " Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.242324 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pchc\" (UniqueName: \"kubernetes.io/projected/acb5fca1-1ef0-4678-8323-5a42790a0998-kube-api-access-2pchc\") pod \"acb5fca1-1ef0-4678-8323-5a42790a0998\" (UID: \"acb5fca1-1ef0-4678-8323-5a42790a0998\") " Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.265444 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acb5fca1-1ef0-4678-8323-5a42790a0998-kube-api-access-2pchc" (OuterVolumeSpecName: "kube-api-access-2pchc") pod "acb5fca1-1ef0-4678-8323-5a42790a0998" (UID: "acb5fca1-1ef0-4678-8323-5a42790a0998"). InnerVolumeSpecName "kube-api-access-2pchc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.274979 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acb5fca1-1ef0-4678-8323-5a42790a0998-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "acb5fca1-1ef0-4678-8323-5a42790a0998" (UID: "acb5fca1-1ef0-4678-8323-5a42790a0998"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.311012 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acb5fca1-1ef0-4678-8323-5a42790a0998-inventory" (OuterVolumeSpecName: "inventory") pod "acb5fca1-1ef0-4678-8323-5a42790a0998" (UID: "acb5fca1-1ef0-4678-8323-5a42790a0998"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.351041 4625 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/acb5fca1-1ef0-4678-8323-5a42790a0998-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.351078 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/acb5fca1-1ef0-4678-8323-5a42790a0998-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.351124 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pchc\" (UniqueName: \"kubernetes.io/projected/acb5fca1-1ef0-4678-8323-5a42790a0998-kube-api-access-2pchc\") on node \"crc\" DevicePath \"\"" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.541091 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" event={"ID":"acb5fca1-1ef0-4678-8323-5a42790a0998","Type":"ContainerDied","Data":"42146edea5fa31afbe7172abb2d890a92d782cc659180c269b7cd3ffa34a99e7"} Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.541152 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42146edea5fa31afbe7172abb2d890a92d782cc659180c269b7cd3ffa34a99e7" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.541169 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.738669 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-76psc"] Dec 02 14:20:48 crc kubenswrapper[4625]: E1202 14:20:48.739205 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb5fca1-1ef0-4678-8323-5a42790a0998" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.739224 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb5fca1-1ef0-4678-8323-5a42790a0998" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.740995 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb5fca1-1ef0-4678-8323-5a42790a0998" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.741818 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-76psc" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.744230 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.744757 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.745542 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.746474 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.766790 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-76psc"] Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.870711 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-76psc\" (UID: \"20d1bcc5-faf7-4265-bdd8-f471a4d449cd\") " pod="openstack/ssh-known-hosts-edpm-deployment-76psc" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.870889 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9jf6\" (UniqueName: \"kubernetes.io/projected/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-kube-api-access-m9jf6\") pod \"ssh-known-hosts-edpm-deployment-76psc\" (UID: \"20d1bcc5-faf7-4265-bdd8-f471a4d449cd\") " pod="openstack/ssh-known-hosts-edpm-deployment-76psc" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.871036 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-76psc\" (UID: \"20d1bcc5-faf7-4265-bdd8-f471a4d449cd\") " pod="openstack/ssh-known-hosts-edpm-deployment-76psc" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.973164 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-76psc\" (UID: \"20d1bcc5-faf7-4265-bdd8-f471a4d449cd\") " pod="openstack/ssh-known-hosts-edpm-deployment-76psc" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.973243 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-76psc\" (UID: \"20d1bcc5-faf7-4265-bdd8-f471a4d449cd\") " pod="openstack/ssh-known-hosts-edpm-deployment-76psc" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.973368 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9jf6\" (UniqueName: \"kubernetes.io/projected/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-kube-api-access-m9jf6\") pod \"ssh-known-hosts-edpm-deployment-76psc\" (UID: \"20d1bcc5-faf7-4265-bdd8-f471a4d449cd\") " pod="openstack/ssh-known-hosts-edpm-deployment-76psc" Dec 02 14:20:48 crc 
kubenswrapper[4625]: I1202 14:20:48.978072 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-76psc\" (UID: \"20d1bcc5-faf7-4265-bdd8-f471a4d449cd\") " pod="openstack/ssh-known-hosts-edpm-deployment-76psc" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.978909 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-76psc\" (UID: \"20d1bcc5-faf7-4265-bdd8-f471a4d449cd\") " pod="openstack/ssh-known-hosts-edpm-deployment-76psc" Dec 02 14:20:48 crc kubenswrapper[4625]: I1202 14:20:48.993065 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9jf6\" (UniqueName: \"kubernetes.io/projected/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-kube-api-access-m9jf6\") pod \"ssh-known-hosts-edpm-deployment-76psc\" (UID: \"20d1bcc5-faf7-4265-bdd8-f471a4d449cd\") " pod="openstack/ssh-known-hosts-edpm-deployment-76psc" Dec 02 14:20:49 crc kubenswrapper[4625]: I1202 14:20:49.063888 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-76psc" Dec 02 14:20:49 crc kubenswrapper[4625]: I1202 14:20:49.272462 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:20:49 crc kubenswrapper[4625]: I1202 14:20:49.272855 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:20:49 crc kubenswrapper[4625]: I1202 14:20:49.272910 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 14:20:49 crc kubenswrapper[4625]: I1202 14:20:49.274219 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3a735a844bd9bd376fbebb84fff6d0aa76d54ffd47aa4fc8c0675440ff0acf9"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:20:49 crc kubenswrapper[4625]: I1202 14:20:49.274289 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://d3a735a844bd9bd376fbebb84fff6d0aa76d54ffd47aa4fc8c0675440ff0acf9" gracePeriod=600 Dec 02 14:20:49 crc kubenswrapper[4625]: I1202 14:20:49.677742 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-76psc"] Dec 02 14:20:50 crc kubenswrapper[4625]: I1202 14:20:50.567103 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" 
containerID="d3a735a844bd9bd376fbebb84fff6d0aa76d54ffd47aa4fc8c0675440ff0acf9" exitCode=0 Dec 02 14:20:50 crc kubenswrapper[4625]: I1202 14:20:50.567175 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"d3a735a844bd9bd376fbebb84fff6d0aa76d54ffd47aa4fc8c0675440ff0acf9"} Dec 02 14:20:50 crc kubenswrapper[4625]: I1202 14:20:50.569016 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"} Dec 02 14:20:50 crc kubenswrapper[4625]: I1202 14:20:50.569192 4625 scope.go:117] "RemoveContainer" containerID="db39ca33ee78e3693aef41da6314b2f7d75facdb86e118adb7b58fa7ad81dd03" Dec 02 14:20:50 crc kubenswrapper[4625]: I1202 14:20:50.576494 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-76psc" event={"ID":"20d1bcc5-faf7-4265-bdd8-f471a4d449cd","Type":"ContainerStarted","Data":"815f4db5c248d40bf5cfd46c8dcece18631ee24aa090785ce5cafd7338c9a14b"} Dec 02 14:20:50 crc kubenswrapper[4625]: I1202 14:20:50.576564 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-76psc" event={"ID":"20d1bcc5-faf7-4265-bdd8-f471a4d449cd","Type":"ContainerStarted","Data":"d42f30824bba06df6edd56fa293a0e1d961a1caa53f3eb33f70ed69daaf6a910"} Dec 02 14:20:50 crc kubenswrapper[4625]: I1202 14:20:50.639026 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-76psc" podStartSLOduration=2.16521384 podStartE2EDuration="2.638998312s" podCreationTimestamp="2025-12-02 14:20:48 +0000 UTC" firstStartedPulling="2025-12-02 14:20:49.694010168 +0000 UTC m=+2205.656187263" lastFinishedPulling="2025-12-02 14:20:50.16779466 +0000 UTC m=+2206.129971735" observedRunningTime="2025-12-02 14:20:50.63029101 +0000 UTC m=+2206.592468085" watchObservedRunningTime="2025-12-02 14:20:50.638998312 +0000 UTC m=+2206.601175387" Dec 02 14:20:50 crc kubenswrapper[4625]: I1202 14:20:50.964073 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:50 crc kubenswrapper[4625]: I1202 14:20:50.964450 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:51 crc kubenswrapper[4625]: I1202 14:20:51.021012 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:51 crc kubenswrapper[4625]: I1202 14:20:51.675056 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:51 crc kubenswrapper[4625]: I1202 14:20:51.737891 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-grrwb"] Dec 02 14:20:53 crc kubenswrapper[4625]: I1202 14:20:53.621141 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-grrwb" podUID="9eb76be6-0575-4d41-a830-587195ec0bf6" containerName="registry-server" containerID="cri-o://5e9c86672af5db61f49d2fa36c6b5ffc7b061dca2a1f8e6e6cdd506de11089a9" gracePeriod=2 Dec 02 14:20:55 crc 
kubenswrapper[4625]: I1202 14:20:55.648144 4625 generic.go:334] "Generic (PLEG): container finished" podID="9eb76be6-0575-4d41-a830-587195ec0bf6" containerID="5e9c86672af5db61f49d2fa36c6b5ffc7b061dca2a1f8e6e6cdd506de11089a9" exitCode=0 Dec 02 14:20:55 crc kubenswrapper[4625]: I1202 14:20:55.648420 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grrwb" event={"ID":"9eb76be6-0575-4d41-a830-587195ec0bf6","Type":"ContainerDied","Data":"5e9c86672af5db61f49d2fa36c6b5ffc7b061dca2a1f8e6e6cdd506de11089a9"} Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:55.999725 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.081909 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7q9j\" (UniqueName: \"kubernetes.io/projected/9eb76be6-0575-4d41-a830-587195ec0bf6-kube-api-access-j7q9j\") pod \"9eb76be6-0575-4d41-a830-587195ec0bf6\" (UID: \"9eb76be6-0575-4d41-a830-587195ec0bf6\") " Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.082154 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9eb76be6-0575-4d41-a830-587195ec0bf6-catalog-content\") pod \"9eb76be6-0575-4d41-a830-587195ec0bf6\" (UID: \"9eb76be6-0575-4d41-a830-587195ec0bf6\") " Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.082194 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9eb76be6-0575-4d41-a830-587195ec0bf6-utilities\") pod \"9eb76be6-0575-4d41-a830-587195ec0bf6\" (UID: \"9eb76be6-0575-4d41-a830-587195ec0bf6\") " Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.083273 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eb76be6-0575-4d41-a830-587195ec0bf6-utilities" (OuterVolumeSpecName: "utilities") pod "9eb76be6-0575-4d41-a830-587195ec0bf6" (UID: "9eb76be6-0575-4d41-a830-587195ec0bf6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.093557 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb76be6-0575-4d41-a830-587195ec0bf6-kube-api-access-j7q9j" (OuterVolumeSpecName: "kube-api-access-j7q9j") pod "9eb76be6-0575-4d41-a830-587195ec0bf6" (UID: "9eb76be6-0575-4d41-a830-587195ec0bf6"). InnerVolumeSpecName "kube-api-access-j7q9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.108027 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eb76be6-0575-4d41-a830-587195ec0bf6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9eb76be6-0575-4d41-a830-587195ec0bf6" (UID: "9eb76be6-0575-4d41-a830-587195ec0bf6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.185492 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7q9j\" (UniqueName: \"kubernetes.io/projected/9eb76be6-0575-4d41-a830-587195ec0bf6-kube-api-access-j7q9j\") on node \"crc\" DevicePath \"\"" Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.185530 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9eb76be6-0575-4d41-a830-587195ec0bf6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.185539 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9eb76be6-0575-4d41-a830-587195ec0bf6-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.679637 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grrwb" event={"ID":"9eb76be6-0575-4d41-a830-587195ec0bf6","Type":"ContainerDied","Data":"e17cd629e8da541e100a0cbe2ac43feb73cb2bcb1d08a81b0e13f662fe6f2055"} Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.679970 4625 scope.go:117] "RemoveContainer" containerID="5e9c86672af5db61f49d2fa36c6b5ffc7b061dca2a1f8e6e6cdd506de11089a9" Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.680335 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grrwb" Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.712463 4625 scope.go:117] "RemoveContainer" containerID="aeb52d03ebe3daae739c907468fe39f4eba50dd2843cbabc9e26ba85bfabf06c" Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.733276 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-grrwb"] Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.746217 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-grrwb"] Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.754022 4625 scope.go:117] "RemoveContainer" containerID="2167b640fd961744ddb7daa8fd85ce7467b1dad7930bf4e801caff46daf1cd65" Dec 02 14:20:56 crc kubenswrapper[4625]: I1202 14:20:56.875837 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb76be6-0575-4d41-a830-587195ec0bf6" path="/var/lib/kubelet/pods/9eb76be6-0575-4d41-a830-587195ec0bf6/volumes" Dec 02 14:20:58 crc kubenswrapper[4625]: I1202 14:20:58.717177 4625 generic.go:334] "Generic (PLEG): container finished" podID="20d1bcc5-faf7-4265-bdd8-f471a4d449cd" containerID="815f4db5c248d40bf5cfd46c8dcece18631ee24aa090785ce5cafd7338c9a14b" exitCode=0 Dec 02 14:20:58 crc kubenswrapper[4625]: I1202 14:20:58.717258 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-76psc" event={"ID":"20d1bcc5-faf7-4265-bdd8-f471a4d449cd","Type":"ContainerDied","Data":"815f4db5c248d40bf5cfd46c8dcece18631ee24aa090785ce5cafd7338c9a14b"} Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.264275 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-76psc" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.411143 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-ssh-key-openstack-edpm-ipam\") pod \"20d1bcc5-faf7-4265-bdd8-f471a4d449cd\" (UID: \"20d1bcc5-faf7-4265-bdd8-f471a4d449cd\") " Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.411395 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-inventory-0\") pod \"20d1bcc5-faf7-4265-bdd8-f471a4d449cd\" (UID: \"20d1bcc5-faf7-4265-bdd8-f471a4d449cd\") " Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.411485 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9jf6\" (UniqueName: \"kubernetes.io/projected/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-kube-api-access-m9jf6\") pod \"20d1bcc5-faf7-4265-bdd8-f471a4d449cd\" (UID: \"20d1bcc5-faf7-4265-bdd8-f471a4d449cd\") " Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.420668 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-kube-api-access-m9jf6" (OuterVolumeSpecName: "kube-api-access-m9jf6") pod "20d1bcc5-faf7-4265-bdd8-f471a4d449cd" (UID: "20d1bcc5-faf7-4265-bdd8-f471a4d449cd"). InnerVolumeSpecName "kube-api-access-m9jf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.451582 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "20d1bcc5-faf7-4265-bdd8-f471a4d449cd" (UID: "20d1bcc5-faf7-4265-bdd8-f471a4d449cd"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.454571 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "20d1bcc5-faf7-4265-bdd8-f471a4d449cd" (UID: "20d1bcc5-faf7-4265-bdd8-f471a4d449cd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.513908 4625 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.513958 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9jf6\" (UniqueName: \"kubernetes.io/projected/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-kube-api-access-m9jf6\") on node \"crc\" DevicePath \"\"" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.513972 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20d1bcc5-faf7-4265-bdd8-f471a4d449cd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.744473 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-76psc" event={"ID":"20d1bcc5-faf7-4265-bdd8-f471a4d449cd","Type":"ContainerDied","Data":"d42f30824bba06df6edd56fa293a0e1d961a1caa53f3eb33f70ed69daaf6a910"} Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.744535 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d42f30824bba06df6edd56fa293a0e1d961a1caa53f3eb33f70ed69daaf6a910" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.744568 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-76psc" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.876639 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql"] Dec 02 14:21:00 crc kubenswrapper[4625]: E1202 14:21:00.878021 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb76be6-0575-4d41-a830-587195ec0bf6" containerName="extract-utilities" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.878129 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb76be6-0575-4d41-a830-587195ec0bf6" containerName="extract-utilities" Dec 02 14:21:00 crc kubenswrapper[4625]: E1202 14:21:00.878233 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb76be6-0575-4d41-a830-587195ec0bf6" containerName="extract-content" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.878304 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb76be6-0575-4d41-a830-587195ec0bf6" containerName="extract-content" Dec 02 14:21:00 crc kubenswrapper[4625]: E1202 14:21:00.878422 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d1bcc5-faf7-4265-bdd8-f471a4d449cd" containerName="ssh-known-hosts-edpm-deployment" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.878493 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d1bcc5-faf7-4265-bdd8-f471a4d449cd" containerName="ssh-known-hosts-edpm-deployment" Dec 02 14:21:00 crc kubenswrapper[4625]: E1202 14:21:00.878577 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb76be6-0575-4d41-a830-587195ec0bf6" containerName="registry-server" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.878695 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb76be6-0575-4d41-a830-587195ec0bf6" containerName="registry-server" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.879047 4625 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="20d1bcc5-faf7-4265-bdd8-f471a4d449cd" containerName="ssh-known-hosts-edpm-deployment" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.879149 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb76be6-0575-4d41-a830-587195ec0bf6" containerName="registry-server" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.880157 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.885729 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql"] Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.887125 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.887199 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.888520 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 14:21:00 crc kubenswrapper[4625]: I1202 14:21:00.888673 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 14:21:01 crc kubenswrapper[4625]: I1202 14:21:01.030119 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcn5n\" (UniqueName: \"kubernetes.io/projected/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-kube-api-access-vcn5n\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8jql\" (UID: \"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" Dec 02 14:21:01 crc kubenswrapper[4625]: I1202 14:21:01.030201 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8jql\" (UID: \"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" Dec 02 14:21:01 crc kubenswrapper[4625]: I1202 14:21:01.030275 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8jql\" (UID: \"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" Dec 02 14:21:01 crc kubenswrapper[4625]: I1202 14:21:01.132071 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8jql\" (UID: \"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" Dec 02 14:21:01 crc kubenswrapper[4625]: I1202 14:21:01.132171 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8jql\" (UID: \"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" Dec 02 14:21:01 crc kubenswrapper[4625]: I1202 14:21:01.132337 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcn5n\" (UniqueName: \"kubernetes.io/projected/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-kube-api-access-vcn5n\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8jql\" (UID: \"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" Dec 02 14:21:01 crc kubenswrapper[4625]: I1202 14:21:01.138903 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8jql\" (UID: \"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" Dec 02 14:21:01 crc kubenswrapper[4625]: I1202 14:21:01.140781 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8jql\" (UID: \"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" Dec 02 14:21:01 crc kubenswrapper[4625]: I1202 14:21:01.162474 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcn5n\" (UniqueName: \"kubernetes.io/projected/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-kube-api-access-vcn5n\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8jql\" (UID: \"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" Dec 02 14:21:01 crc kubenswrapper[4625]: I1202 14:21:01.217589 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" Dec 02 14:21:01 crc kubenswrapper[4625]: I1202 14:21:01.850162 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql"] Dec 02 14:21:02 crc kubenswrapper[4625]: I1202 14:21:02.768578 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" event={"ID":"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6","Type":"ContainerStarted","Data":"b11e6402a659249f6edee1daba7f5e4bfd2e44dffdf8fadac5e1693f9370b744"} Dec 02 14:21:02 crc kubenswrapper[4625]: I1202 14:21:02.769503 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" event={"ID":"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6","Type":"ContainerStarted","Data":"2cafa70da0628413271a98e1da19ca2a91f4b5aa3d970fc131e92db1f79cbd70"} Dec 02 14:21:02 crc kubenswrapper[4625]: I1202 14:21:02.797533 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" podStartSLOduration=2.294587254 podStartE2EDuration="2.797511772s" podCreationTimestamp="2025-12-02 14:21:00 +0000 UTC" firstStartedPulling="2025-12-02 14:21:01.862265557 +0000 UTC m=+2217.824442632" lastFinishedPulling="2025-12-02 14:21:02.365190075 +0000 UTC m=+2218.327367150" observedRunningTime="2025-12-02 14:21:02.793000712 +0000 UTC m=+2218.755177777" watchObservedRunningTime="2025-12-02 14:21:02.797511772 +0000 UTC m=+2218.759688847" Dec 02 14:21:12 crc kubenswrapper[4625]: I1202 14:21:12.894710 4625 generic.go:334] "Generic (PLEG): container finished" podID="bfdfc7da-d385-4c7c-8e45-fd36703b7fb6" containerID="b11e6402a659249f6edee1daba7f5e4bfd2e44dffdf8fadac5e1693f9370b744" exitCode=0 Dec 02 14:21:12 crc kubenswrapper[4625]: I1202 14:21:12.894843 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" event={"ID":"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6","Type":"ContainerDied","Data":"b11e6402a659249f6edee1daba7f5e4bfd2e44dffdf8fadac5e1693f9370b744"} Dec 02 14:21:14 crc kubenswrapper[4625]: I1202 14:21:14.434210 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" Dec 02 14:21:14 crc kubenswrapper[4625]: I1202 14:21:14.471925 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-ssh-key\") pod \"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6\" (UID: \"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6\") " Dec 02 14:21:14 crc kubenswrapper[4625]: I1202 14:21:14.471991 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-inventory\") pod \"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6\" (UID: \"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6\") " Dec 02 14:21:14 crc kubenswrapper[4625]: I1202 14:21:14.472196 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcn5n\" (UniqueName: \"kubernetes.io/projected/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-kube-api-access-vcn5n\") pod \"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6\" (UID: \"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6\") " Dec 02 14:21:14 crc kubenswrapper[4625]: I1202 14:21:14.502233 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-kube-api-access-vcn5n" (OuterVolumeSpecName: "kube-api-access-vcn5n") pod "bfdfc7da-d385-4c7c-8e45-fd36703b7fb6" (UID: "bfdfc7da-d385-4c7c-8e45-fd36703b7fb6"). InnerVolumeSpecName "kube-api-access-vcn5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:21:14 crc kubenswrapper[4625]: I1202 14:21:14.508770 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bfdfc7da-d385-4c7c-8e45-fd36703b7fb6" (UID: "bfdfc7da-d385-4c7c-8e45-fd36703b7fb6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:21:14 crc kubenswrapper[4625]: I1202 14:21:14.523224 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-inventory" (OuterVolumeSpecName: "inventory") pod "bfdfc7da-d385-4c7c-8e45-fd36703b7fb6" (UID: "bfdfc7da-d385-4c7c-8e45-fd36703b7fb6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:21:14 crc kubenswrapper[4625]: I1202 14:21:14.576297 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcn5n\" (UniqueName: \"kubernetes.io/projected/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-kube-api-access-vcn5n\") on node \"crc\" DevicePath \"\"" Dec 02 14:21:14 crc kubenswrapper[4625]: I1202 14:21:14.576479 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:21:14 crc kubenswrapper[4625]: I1202 14:21:14.576493 4625 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfdfc7da-d385-4c7c-8e45-fd36703b7fb6-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 14:21:14 crc kubenswrapper[4625]: I1202 14:21:14.920748 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" event={"ID":"bfdfc7da-d385-4c7c-8e45-fd36703b7fb6","Type":"ContainerDied","Data":"2cafa70da0628413271a98e1da19ca2a91f4b5aa3d970fc131e92db1f79cbd70"} Dec 02 14:21:14 crc kubenswrapper[4625]: I1202 14:21:14.920827 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cafa70da0628413271a98e1da19ca2a91f4b5aa3d970fc131e92db1f79cbd70" Dec 02 14:21:14 crc kubenswrapper[4625]: I1202 14:21:14.920830 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8jql" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.035589 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4"] Dec 02 14:21:15 crc kubenswrapper[4625]: E1202 14:21:15.036457 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfdfc7da-d385-4c7c-8e45-fd36703b7fb6" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.036480 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfdfc7da-d385-4c7c-8e45-fd36703b7fb6" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.036713 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfdfc7da-d385-4c7c-8e45-fd36703b7fb6" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.037653 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.041040 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.041225 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.043889 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.044245 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.053035 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4"] Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.088507 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abdce099-8a70-4557-860e-379c32fd5d6c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4\" (UID: \"abdce099-8a70-4557-860e-379c32fd5d6c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.088764 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzw4z\" (UniqueName: \"kubernetes.io/projected/abdce099-8a70-4557-860e-379c32fd5d6c-kube-api-access-xzw4z\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4\" (UID: \"abdce099-8a70-4557-860e-379c32fd5d6c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.088952 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/abdce099-8a70-4557-860e-379c32fd5d6c-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4\" (UID: \"abdce099-8a70-4557-860e-379c32fd5d6c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.191480 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abdce099-8a70-4557-860e-379c32fd5d6c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4\" (UID: \"abdce099-8a70-4557-860e-379c32fd5d6c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.191569 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzw4z\" (UniqueName: \"kubernetes.io/projected/abdce099-8a70-4557-860e-379c32fd5d6c-kube-api-access-xzw4z\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4\" (UID: \"abdce099-8a70-4557-860e-379c32fd5d6c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.191651 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/abdce099-8a70-4557-860e-379c32fd5d6c-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4\" (UID: 
\"abdce099-8a70-4557-860e-379c32fd5d6c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.203016 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/abdce099-8a70-4557-860e-379c32fd5d6c-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4\" (UID: \"abdce099-8a70-4557-860e-379c32fd5d6c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.216294 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abdce099-8a70-4557-860e-379c32fd5d6c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4\" (UID: \"abdce099-8a70-4557-860e-379c32fd5d6c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.218285 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzw4z\" (UniqueName: \"kubernetes.io/projected/abdce099-8a70-4557-860e-379c32fd5d6c-kube-api-access-xzw4z\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4\" (UID: \"abdce099-8a70-4557-860e-379c32fd5d6c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.403748 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" Dec 02 14:21:15 crc kubenswrapper[4625]: I1202 14:21:15.996898 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4"] Dec 02 14:21:16 crc kubenswrapper[4625]: I1202 14:21:16.949027 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" event={"ID":"abdce099-8a70-4557-860e-379c32fd5d6c","Type":"ContainerStarted","Data":"96d424853a122c689e17894686abc58395d1f523cbbf4ea1315e4fab83fa2799"} Dec 02 14:21:17 crc kubenswrapper[4625]: I1202 14:21:17.961356 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" event={"ID":"abdce099-8a70-4557-860e-379c32fd5d6c","Type":"ContainerStarted","Data":"1aa6a017205e40b89cdbb9ee9ea37c116f2e7dba798e4b65a0ea40f922c559f2"} Dec 02 14:21:17 crc kubenswrapper[4625]: I1202 14:21:17.984822 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" podStartSLOduration=2.348406014 podStartE2EDuration="2.984798228s" podCreationTimestamp="2025-12-02 14:21:15 +0000 UTC" firstStartedPulling="2025-12-02 14:21:16.014037306 +0000 UTC m=+2231.976214381" lastFinishedPulling="2025-12-02 14:21:16.65042948 +0000 UTC m=+2232.612606595" observedRunningTime="2025-12-02 14:21:17.981805678 +0000 UTC m=+2233.943982753" watchObservedRunningTime="2025-12-02 14:21:17.984798228 +0000 UTC m=+2233.946975303" Dec 02 14:21:29 crc kubenswrapper[4625]: I1202 14:21:29.079554 4625 generic.go:334] "Generic (PLEG): container finished" podID="abdce099-8a70-4557-860e-379c32fd5d6c" containerID="1aa6a017205e40b89cdbb9ee9ea37c116f2e7dba798e4b65a0ea40f922c559f2" exitCode=0 Dec 02 14:21:29 crc kubenswrapper[4625]: I1202 14:21:29.079695 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" 
event={"ID":"abdce099-8a70-4557-860e-379c32fd5d6c","Type":"ContainerDied","Data":"1aa6a017205e40b89cdbb9ee9ea37c116f2e7dba798e4b65a0ea40f922c559f2"} Dec 02 14:21:30 crc kubenswrapper[4625]: I1202 14:21:30.638968 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" Dec 02 14:21:30 crc kubenswrapper[4625]: I1202 14:21:30.776494 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzw4z\" (UniqueName: \"kubernetes.io/projected/abdce099-8a70-4557-860e-379c32fd5d6c-kube-api-access-xzw4z\") pod \"abdce099-8a70-4557-860e-379c32fd5d6c\" (UID: \"abdce099-8a70-4557-860e-379c32fd5d6c\") " Dec 02 14:21:30 crc kubenswrapper[4625]: I1202 14:21:30.776645 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/abdce099-8a70-4557-860e-379c32fd5d6c-ssh-key\") pod \"abdce099-8a70-4557-860e-379c32fd5d6c\" (UID: \"abdce099-8a70-4557-860e-379c32fd5d6c\") " Dec 02 14:21:30 crc kubenswrapper[4625]: I1202 14:21:30.776814 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abdce099-8a70-4557-860e-379c32fd5d6c-inventory\") pod \"abdce099-8a70-4557-860e-379c32fd5d6c\" (UID: \"abdce099-8a70-4557-860e-379c32fd5d6c\") " Dec 02 14:21:30 crc kubenswrapper[4625]: I1202 14:21:30.798139 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abdce099-8a70-4557-860e-379c32fd5d6c-kube-api-access-xzw4z" (OuterVolumeSpecName: "kube-api-access-xzw4z") pod "abdce099-8a70-4557-860e-379c32fd5d6c" (UID: "abdce099-8a70-4557-860e-379c32fd5d6c"). InnerVolumeSpecName "kube-api-access-xzw4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:21:30 crc kubenswrapper[4625]: I1202 14:21:30.817463 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abdce099-8a70-4557-860e-379c32fd5d6c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "abdce099-8a70-4557-860e-379c32fd5d6c" (UID: "abdce099-8a70-4557-860e-379c32fd5d6c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:21:30 crc kubenswrapper[4625]: I1202 14:21:30.827511 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abdce099-8a70-4557-860e-379c32fd5d6c-inventory" (OuterVolumeSpecName: "inventory") pod "abdce099-8a70-4557-860e-379c32fd5d6c" (UID: "abdce099-8a70-4557-860e-379c32fd5d6c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:21:30 crc kubenswrapper[4625]: I1202 14:21:30.880747 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzw4z\" (UniqueName: \"kubernetes.io/projected/abdce099-8a70-4557-860e-379c32fd5d6c-kube-api-access-xzw4z\") on node \"crc\" DevicePath \"\"" Dec 02 14:21:30 crc kubenswrapper[4625]: I1202 14:21:30.880796 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/abdce099-8a70-4557-860e-379c32fd5d6c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:21:30 crc kubenswrapper[4625]: I1202 14:21:30.880812 4625 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abdce099-8a70-4557-860e-379c32fd5d6c-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.108675 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" event={"ID":"abdce099-8a70-4557-860e-379c32fd5d6c","Type":"ContainerDied","Data":"96d424853a122c689e17894686abc58395d1f523cbbf4ea1315e4fab83fa2799"} Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.108735 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96d424853a122c689e17894686abc58395d1f523cbbf4ea1315e4fab83fa2799" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.108813 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.332321 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7"] Dec 02 14:21:31 crc kubenswrapper[4625]: E1202 14:21:31.333072 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abdce099-8a70-4557-860e-379c32fd5d6c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.333150 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="abdce099-8a70-4557-860e-379c32fd5d6c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.333439 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="abdce099-8a70-4557-860e-379c32fd5d6c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.334404 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.344649 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.347274 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.347527 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.347750 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.352079 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.352718 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.353065 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.355271 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.380616 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7"] Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.393902 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.394113 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.394190 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.394226 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.394350 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.394412 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.394438 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.394470 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.394497 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.394522 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.394735 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: 
\"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.394854 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2jjj\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-kube-api-access-d2jjj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.394927 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.395092 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.498994 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.499079 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.499110 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.499140 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc 
kubenswrapper[4625]: I1202 14:21:31.499165 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.499244 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.499279 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2jjj\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-kube-api-access-d2jjj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.499302 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.499350 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.499410 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.499456 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.499475 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: 
\"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.499494 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.499536 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.511531 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.519134 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.519795 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.520578 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.528797 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.533449 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.533919 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.534834 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.545165 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.632960 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.634162 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.634394 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.635549 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2jjj\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-kube-api-access-d2jjj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.640465 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:31 crc kubenswrapper[4625]: I1202 14:21:31.658182 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:21:32 crc kubenswrapper[4625]: I1202 14:21:32.364456 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7"] Dec 02 14:21:33 crc kubenswrapper[4625]: I1202 14:21:33.130225 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" event={"ID":"a7c36e4d-5e3c-4036-abef-01a4eb799665","Type":"ContainerStarted","Data":"2db43c4220b48ceb9515bdb2a7cb1eb43c2982b9367cc144f79ed7216def687a"} Dec 02 14:21:34 crc kubenswrapper[4625]: I1202 14:21:34.146241 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" event={"ID":"a7c36e4d-5e3c-4036-abef-01a4eb799665","Type":"ContainerStarted","Data":"35bd083095998e55018df625d32bd306c90d911026bb037031c159aa4685e4f4"} Dec 02 14:21:34 crc kubenswrapper[4625]: I1202 14:21:34.180428 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" podStartSLOduration=2.6377644890000003 podStartE2EDuration="3.180396094s" podCreationTimestamp="2025-12-02 14:21:31 +0000 UTC" firstStartedPulling="2025-12-02 14:21:32.358469778 +0000 UTC m=+2248.320646853" lastFinishedPulling="2025-12-02 14:21:32.901101383 +0000 UTC m=+2248.863278458" observedRunningTime="2025-12-02 14:21:34.175270647 +0000 UTC m=+2250.137447732" watchObservedRunningTime="2025-12-02 14:21:34.180396094 +0000 UTC m=+2250.142573169" Dec 02 14:22:15 crc kubenswrapper[4625]: I1202 14:22:15.631814 4625 generic.go:334] "Generic (PLEG): container finished" podID="a7c36e4d-5e3c-4036-abef-01a4eb799665" containerID="35bd083095998e55018df625d32bd306c90d911026bb037031c159aa4685e4f4" exitCode=0 Dec 02 14:22:15 crc kubenswrapper[4625]: I1202 14:22:15.631912 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" event={"ID":"a7c36e4d-5e3c-4036-abef-01a4eb799665","Type":"ContainerDied","Data":"35bd083095998e55018df625d32bd306c90d911026bb037031c159aa4685e4f4"} Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.108948 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.275176 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-nova-combined-ca-bundle\") pod \"a7c36e4d-5e3c-4036-abef-01a4eb799665\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.275339 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"a7c36e4d-5e3c-4036-abef-01a4eb799665\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.275373 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-ovn-combined-ca-bundle\") pod \"a7c36e4d-5e3c-4036-abef-01a4eb799665\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.275417 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"a7c36e4d-5e3c-4036-abef-01a4eb799665\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.275464 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-inventory\") pod \"a7c36e4d-5e3c-4036-abef-01a4eb799665\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.275575 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-libvirt-combined-ca-bundle\") pod \"a7c36e4d-5e3c-4036-abef-01a4eb799665\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.275593 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-repo-setup-combined-ca-bundle\") pod \"a7c36e4d-5e3c-4036-abef-01a4eb799665\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.275624 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-ovn-default-certs-0\") pod \"a7c36e4d-5e3c-4036-abef-01a4eb799665\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.275707 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-telemetry-combined-ca-bundle\") pod \"a7c36e4d-5e3c-4036-abef-01a4eb799665\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") " 
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.275738 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-neutron-metadata-combined-ca-bundle\") pod \"a7c36e4d-5e3c-4036-abef-01a4eb799665\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") "
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.275771 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"a7c36e4d-5e3c-4036-abef-01a4eb799665\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") "
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.275801 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2jjj\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-kube-api-access-d2jjj\") pod \"a7c36e4d-5e3c-4036-abef-01a4eb799665\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") "
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.275893 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-bootstrap-combined-ca-bundle\") pod \"a7c36e4d-5e3c-4036-abef-01a4eb799665\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") "
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.275950 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-ssh-key\") pod \"a7c36e4d-5e3c-4036-abef-01a4eb799665\" (UID: \"a7c36e4d-5e3c-4036-abef-01a4eb799665\") "
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.284255 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a7c36e4d-5e3c-4036-abef-01a4eb799665" (UID: "a7c36e4d-5e3c-4036-abef-01a4eb799665"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.284807 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "a7c36e4d-5e3c-4036-abef-01a4eb799665" (UID: "a7c36e4d-5e3c-4036-abef-01a4eb799665"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.286068 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "a7c36e4d-5e3c-4036-abef-01a4eb799665" (UID: "a7c36e4d-5e3c-4036-abef-01a4eb799665"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.286652 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "a7c36e4d-5e3c-4036-abef-01a4eb799665" (UID: "a7c36e4d-5e3c-4036-abef-01a4eb799665"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.290240 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a7c36e4d-5e3c-4036-abef-01a4eb799665" (UID: "a7c36e4d-5e3c-4036-abef-01a4eb799665"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.291671 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a7c36e4d-5e3c-4036-abef-01a4eb799665" (UID: "a7c36e4d-5e3c-4036-abef-01a4eb799665"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.292890 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a7c36e4d-5e3c-4036-abef-01a4eb799665" (UID: "a7c36e4d-5e3c-4036-abef-01a4eb799665"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.292963 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "a7c36e4d-5e3c-4036-abef-01a4eb799665" (UID: "a7c36e4d-5e3c-4036-abef-01a4eb799665"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.293594 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a7c36e4d-5e3c-4036-abef-01a4eb799665" (UID: "a7c36e4d-5e3c-4036-abef-01a4eb799665"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.294193 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "a7c36e4d-5e3c-4036-abef-01a4eb799665" (UID: "a7c36e4d-5e3c-4036-abef-01a4eb799665"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.295175 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a7c36e4d-5e3c-4036-abef-01a4eb799665" (UID: "a7c36e4d-5e3c-4036-abef-01a4eb799665"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.295806 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-kube-api-access-d2jjj" (OuterVolumeSpecName: "kube-api-access-d2jjj") pod "a7c36e4d-5e3c-4036-abef-01a4eb799665" (UID: "a7c36e4d-5e3c-4036-abef-01a4eb799665"). InnerVolumeSpecName "kube-api-access-d2jjj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.315473 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-inventory" (OuterVolumeSpecName: "inventory") pod "a7c36e4d-5e3c-4036-abef-01a4eb799665" (UID: "a7c36e4d-5e3c-4036-abef-01a4eb799665"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.317573 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a7c36e4d-5e3c-4036-abef-01a4eb799665" (UID: "a7c36e4d-5e3c-4036-abef-01a4eb799665"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.379245 4625 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.379459 4625 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.379529 4625 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-inventory\") on node \"crc\" DevicePath \"\""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.379548 4625 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.379563 4625 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.379580 4625 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.379593 4625 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.379606 4625 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.379620 4625 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.379634 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2jjj\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-kube-api-access-d2jjj\") on node \"crc\" DevicePath \"\""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.379647 4625 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.379663 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.379677 4625 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c36e4d-5e3c-4036-abef-01a4eb799665-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.379688 4625 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a7c36e4d-5e3c-4036-abef-01a4eb799665-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.657261 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7" event={"ID":"a7c36e4d-5e3c-4036-abef-01a4eb799665","Type":"ContainerDied","Data":"2db43c4220b48ceb9515bdb2a7cb1eb43c2982b9367cc144f79ed7216def687a"}
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.657850 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2db43c4220b48ceb9515bdb2a7cb1eb43c2982b9367cc144f79ed7216def687a"
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.657519 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7"
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.926561 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"]
Dec 02 14:22:17 crc kubenswrapper[4625]: E1202 14:22:17.927113 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c36e4d-5e3c-4036-abef-01a4eb799665" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.927140 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c36e4d-5e3c-4036-abef-01a4eb799665" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.927353 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c36e4d-5e3c-4036-abef-01a4eb799665" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.928211 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.933510 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.933629 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8"
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.933706 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.933722 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.933859 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 02 14:22:17 crc kubenswrapper[4625]: I1202 14:22:17.955166 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"]
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.104739 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9505225f-1412-45d6-96f3-27b3ab5c35c1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s5gxh\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.104804 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s5gxh\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.104863 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s5gxh\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.104922 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s5gxh\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.104945 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxsmc\" (UniqueName: \"kubernetes.io/projected/9505225f-1412-45d6-96f3-27b3ab5c35c1-kube-api-access-qxsmc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s5gxh\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.208428 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9505225f-1412-45d6-96f3-27b3ab5c35c1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s5gxh\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.208484 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s5gxh\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.208536 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s5gxh\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.208567 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s5gxh\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.208588 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxsmc\" (UniqueName: \"kubernetes.io/projected/9505225f-1412-45d6-96f3-27b3ab5c35c1-kube-api-access-qxsmc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s5gxh\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.211566 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9505225f-1412-45d6-96f3-27b3ab5c35c1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s5gxh\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.214592 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s5gxh\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.215144 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s5gxh\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.219753 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s5gxh\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.228386 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxsmc\" (UniqueName: \"kubernetes.io/projected/9505225f-1412-45d6-96f3-27b3ab5c35c1-kube-api-access-qxsmc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s5gxh\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.306527 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:22:18 crc kubenswrapper[4625]: I1202 14:22:18.945413 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"]
Dec 02 14:22:19 crc kubenswrapper[4625]: I1202 14:22:19.686893 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh" event={"ID":"9505225f-1412-45d6-96f3-27b3ab5c35c1","Type":"ContainerStarted","Data":"b04f6917e294485f9dffcb389c95ae1d8092a53da62410b849a20591a4e9a549"}
Dec 02 14:22:20 crc kubenswrapper[4625]: I1202 14:22:20.701855 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh" event={"ID":"9505225f-1412-45d6-96f3-27b3ab5c35c1","Type":"ContainerStarted","Data":"9cd542b4dadaaffbbe73a1973a64e52cf3241195670859319a39d623cd997d7f"}
Dec 02 14:22:20 crc kubenswrapper[4625]: I1202 14:22:20.730903 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh" podStartSLOduration=3.066396774 podStartE2EDuration="3.730864256s" podCreationTimestamp="2025-12-02 14:22:17 +0000 UTC" firstStartedPulling="2025-12-02 14:22:18.951717519 +0000 UTC m=+2294.913894594" lastFinishedPulling="2025-12-02 14:22:19.616185001 +0000 UTC m=+2295.578362076" observedRunningTime="2025-12-02 14:22:20.729263553 +0000 UTC m=+2296.691440648" watchObservedRunningTime="2025-12-02 14:22:20.730864256 +0000 UTC m=+2296.693041321"
Dec 02 14:23:19 crc kubenswrapper[4625]: I1202 14:23:19.272004 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 14:23:19 crc kubenswrapper[4625]: I1202 14:23:19.275296 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 14:23:32 crc kubenswrapper[4625]: I1202 14:23:32.577525 4625 generic.go:334] "Generic (PLEG): container finished" podID="9505225f-1412-45d6-96f3-27b3ab5c35c1" containerID="9cd542b4dadaaffbbe73a1973a64e52cf3241195670859319a39d623cd997d7f" exitCode=0
Dec 02 14:23:32 crc kubenswrapper[4625]: I1202 14:23:32.577641 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh" event={"ID":"9505225f-1412-45d6-96f3-27b3ab5c35c1","Type":"ContainerDied","Data":"9cd542b4dadaaffbbe73a1973a64e52cf3241195670859319a39d623cd997d7f"}
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.011803 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.123864 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-inventory\") pod \"9505225f-1412-45d6-96f3-27b3ab5c35c1\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") "
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.123927 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9505225f-1412-45d6-96f3-27b3ab5c35c1-ovncontroller-config-0\") pod \"9505225f-1412-45d6-96f3-27b3ab5c35c1\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") "
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.124046 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-ssh-key\") pod \"9505225f-1412-45d6-96f3-27b3ab5c35c1\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") "
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.124194 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-ovn-combined-ca-bundle\") pod \"9505225f-1412-45d6-96f3-27b3ab5c35c1\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") "
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.124360 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxsmc\" (UniqueName: \"kubernetes.io/projected/9505225f-1412-45d6-96f3-27b3ab5c35c1-kube-api-access-qxsmc\") pod \"9505225f-1412-45d6-96f3-27b3ab5c35c1\" (UID: \"9505225f-1412-45d6-96f3-27b3ab5c35c1\") "
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.131334 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9505225f-1412-45d6-96f3-27b3ab5c35c1" (UID: "9505225f-1412-45d6-96f3-27b3ab5c35c1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.131954 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9505225f-1412-45d6-96f3-27b3ab5c35c1-kube-api-access-qxsmc" (OuterVolumeSpecName: "kube-api-access-qxsmc") pod "9505225f-1412-45d6-96f3-27b3ab5c35c1" (UID: "9505225f-1412-45d6-96f3-27b3ab5c35c1"). InnerVolumeSpecName "kube-api-access-qxsmc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.155206 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9505225f-1412-45d6-96f3-27b3ab5c35c1-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "9505225f-1412-45d6-96f3-27b3ab5c35c1" (UID: "9505225f-1412-45d6-96f3-27b3ab5c35c1"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.160979 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9505225f-1412-45d6-96f3-27b3ab5c35c1" (UID: "9505225f-1412-45d6-96f3-27b3ab5c35c1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.164187 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-inventory" (OuterVolumeSpecName: "inventory") pod "9505225f-1412-45d6-96f3-27b3ab5c35c1" (UID: "9505225f-1412-45d6-96f3-27b3ab5c35c1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.229357 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxsmc\" (UniqueName: \"kubernetes.io/projected/9505225f-1412-45d6-96f3-27b3ab5c35c1-kube-api-access-qxsmc\") on node \"crc\" DevicePath \"\""
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.229426 4625 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-inventory\") on node \"crc\" DevicePath \"\""
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.229464 4625 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9505225f-1412-45d6-96f3-27b3ab5c35c1-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.229478 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.229510 4625 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9505225f-1412-45d6-96f3-27b3ab5c35c1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.600832 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.603435 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s5gxh" event={"ID":"9505225f-1412-45d6-96f3-27b3ab5c35c1","Type":"ContainerDied","Data":"b04f6917e294485f9dffcb389c95ae1d8092a53da62410b849a20591a4e9a549"}
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.603553 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b04f6917e294485f9dffcb389c95ae1d8092a53da62410b849a20591a4e9a549"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.731887 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"]
Dec 02 14:23:34 crc kubenswrapper[4625]: E1202 14:23:34.732413 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9505225f-1412-45d6-96f3-27b3ab5c35c1" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.732433 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="9505225f-1412-45d6-96f3-27b3ab5c35c1" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.732668 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="9505225f-1412-45d6-96f3-27b3ab5c35c1" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.733480 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.736561 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.738144 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.738538 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.738794 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.738957 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.739084 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.760240 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"]
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.851272 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.851446 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt4q5\" (UniqueName: \"kubernetes.io/projected/4a01556d-36c3-4d01-9c45-faccb3941b62-kube-api-access-kt4q5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.851506 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.851616 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.851666 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.851690 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.955144 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.955333 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.955382 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.955412 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.955508 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.955588 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt4q5\" (UniqueName: \"kubernetes.io/projected/4a01556d-36c3-4d01-9c45-faccb3941b62-kube-api-access-kt4q5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.961481 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.962156 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.962203 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.962465 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.967870 4625
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b" Dec 02 14:23:34 crc kubenswrapper[4625]: I1202 14:23:34.978914 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt4q5\" (UniqueName: \"kubernetes.io/projected/4a01556d-36c3-4d01-9c45-faccb3941b62-kube-api-access-kt4q5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b" Dec 02 14:23:35 crc kubenswrapper[4625]: I1202 14:23:35.051939 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b" Dec 02 14:23:35 crc kubenswrapper[4625]: I1202 14:23:35.803556 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"] Dec 02 14:23:36 crc kubenswrapper[4625]: I1202 14:23:36.625672 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b" event={"ID":"4a01556d-36c3-4d01-9c45-faccb3941b62","Type":"ContainerStarted","Data":"4aa662ae1466d4a53eed1d78779ab4c25dc585bb1081da3a22479c20254b90b3"} Dec 02 14:23:37 crc kubenswrapper[4625]: I1202 14:23:37.640182 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b" event={"ID":"4a01556d-36c3-4d01-9c45-faccb3941b62","Type":"ContainerStarted","Data":"18a1af82098f420d685602e1e7253e7ba896677edb230739f129710fa7864d04"} Dec 02 14:23:37 crc kubenswrapper[4625]: I1202 14:23:37.669139 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b" podStartSLOduration=3.184112938 podStartE2EDuration="3.669111316s" podCreationTimestamp="2025-12-02 14:23:34 +0000 UTC" firstStartedPulling="2025-12-02 14:23:35.808619222 +0000 UTC m=+2371.770796287" lastFinishedPulling="2025-12-02 14:23:36.29361759 +0000 UTC m=+2372.255794665" observedRunningTime="2025-12-02 14:23:37.668786538 +0000 UTC m=+2373.630963613" watchObservedRunningTime="2025-12-02 14:23:37.669111316 +0000 UTC m=+2373.631288391" Dec 02 14:23:49 crc kubenswrapper[4625]: I1202 14:23:49.271963 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:23:49 crc kubenswrapper[4625]: I1202 14:23:49.272990 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:24:19 crc kubenswrapper[4625]: I1202 14:24:19.271022 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:24:19 crc kubenswrapper[4625]: I1202 14:24:19.273436 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:24:19 crc kubenswrapper[4625]: I1202 14:24:19.273521 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 14:24:19 crc kubenswrapper[4625]: I1202 14:24:19.274616 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:24:19 crc kubenswrapper[4625]: I1202 14:24:19.274699 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee" gracePeriod=600 Dec 02 14:24:19 crc kubenswrapper[4625]: E1202 14:24:19.416900 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:24:20 crc kubenswrapper[4625]: I1202 14:24:20.124743 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee" exitCode=0 Dec 02 14:24:20 crc kubenswrapper[4625]: I1202 14:24:20.124854 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"} Dec 02 14:24:20 crc kubenswrapper[4625]: I1202 14:24:20.125365 4625 scope.go:117] "RemoveContainer" containerID="d3a735a844bd9bd376fbebb84fff6d0aa76d54ffd47aa4fc8c0675440ff0acf9" Dec 02 14:24:20 crc kubenswrapper[4625]: I1202 14:24:20.127006 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee" Dec 02 14:24:20 crc kubenswrapper[4625]: E1202 14:24:20.127576 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" 
Dec 02 14:24:32 crc kubenswrapper[4625]: I1202 14:24:32.322829 4625 generic.go:334] "Generic (PLEG): container finished" podID="4a01556d-36c3-4d01-9c45-faccb3941b62" containerID="18a1af82098f420d685602e1e7253e7ba896677edb230739f129710fa7864d04" exitCode=0
Dec 02 14:24:32 crc kubenswrapper[4625]: I1202 14:24:32.323059 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b" event={"ID":"4a01556d-36c3-4d01-9c45-faccb3941b62","Type":"ContainerDied","Data":"18a1af82098f420d685602e1e7253e7ba896677edb230739f129710fa7864d04"}
Dec 02 14:24:33 crc kubenswrapper[4625]: I1202 14:24:33.827446 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:24:33 crc kubenswrapper[4625]: I1202 14:24:33.941627 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-inventory\") pod \"4a01556d-36c3-4d01-9c45-faccb3941b62\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") "
Dec 02 14:24:33 crc kubenswrapper[4625]: I1202 14:24:33.941729 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-neutron-metadata-combined-ca-bundle\") pod \"4a01556d-36c3-4d01-9c45-faccb3941b62\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") "
Dec 02 14:24:33 crc kubenswrapper[4625]: I1202 14:24:33.941957 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-nova-metadata-neutron-config-0\") pod \"4a01556d-36c3-4d01-9c45-faccb3941b62\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") "
Dec 02 14:24:33 crc kubenswrapper[4625]: I1202 14:24:33.942915 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-neutron-ovn-metadata-agent-neutron-config-0\") pod \"4a01556d-36c3-4d01-9c45-faccb3941b62\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") "
Dec 02 14:24:33 crc kubenswrapper[4625]: I1202 14:24:33.943020 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-ssh-key\") pod \"4a01556d-36c3-4d01-9c45-faccb3941b62\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") "
Dec 02 14:24:33 crc kubenswrapper[4625]: I1202 14:24:33.943066 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt4q5\" (UniqueName: \"kubernetes.io/projected/4a01556d-36c3-4d01-9c45-faccb3941b62-kube-api-access-kt4q5\") pod \"4a01556d-36c3-4d01-9c45-faccb3941b62\" (UID: \"4a01556d-36c3-4d01-9c45-faccb3941b62\") "
Dec 02 14:24:33 crc kubenswrapper[4625]: I1202 14:24:33.949694 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4a01556d-36c3-4d01-9c45-faccb3941b62" (UID: "4a01556d-36c3-4d01-9c45-faccb3941b62"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:24:33 crc kubenswrapper[4625]: I1202 14:24:33.952907 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a01556d-36c3-4d01-9c45-faccb3941b62-kube-api-access-kt4q5" (OuterVolumeSpecName: "kube-api-access-kt4q5") pod "4a01556d-36c3-4d01-9c45-faccb3941b62" (UID: "4a01556d-36c3-4d01-9c45-faccb3941b62"). InnerVolumeSpecName "kube-api-access-kt4q5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:24:33 crc kubenswrapper[4625]: I1202 14:24:33.977459 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "4a01556d-36c3-4d01-9c45-faccb3941b62" (UID: "4a01556d-36c3-4d01-9c45-faccb3941b62"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:24:33 crc kubenswrapper[4625]: I1202 14:24:33.978692 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4a01556d-36c3-4d01-9c45-faccb3941b62" (UID: "4a01556d-36c3-4d01-9c45-faccb3941b62"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:24:33 crc kubenswrapper[4625]: I1202 14:24:33.980879 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "4a01556d-36c3-4d01-9c45-faccb3941b62" (UID: "4a01556d-36c3-4d01-9c45-faccb3941b62"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:24:33 crc kubenswrapper[4625]: I1202 14:24:33.981089 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-inventory" (OuterVolumeSpecName: "inventory") pod "4a01556d-36c3-4d01-9c45-faccb3941b62" (UID: "4a01556d-36c3-4d01-9c45-faccb3941b62"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.046906 4625 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.046951 4625 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.046964 4625 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.046979 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.046990 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt4q5\" (UniqueName: \"kubernetes.io/projected/4a01556d-36c3-4d01-9c45-faccb3941b62-kube-api-access-kt4q5\") on node \"crc\" DevicePath \"\""
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.047001 4625 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a01556d-36c3-4d01-9c45-faccb3941b62-inventory\") on node \"crc\" DevicePath \"\""
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.345725 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b" event={"ID":"4a01556d-36c3-4d01-9c45-faccb3941b62","Type":"ContainerDied","Data":"4aa662ae1466d4a53eed1d78779ab4c25dc585bb1081da3a22479c20254b90b3"}
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.345786 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aa662ae1466d4a53eed1d78779ab4c25dc585bb1081da3a22479c20254b90b3"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.345827 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.470737 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"]
Dec 02 14:24:34 crc kubenswrapper[4625]: E1202 14:24:34.471424 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a01556d-36c3-4d01-9c45-faccb3941b62" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.471450 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a01556d-36c3-4d01-9c45-faccb3941b62" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.471693 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a01556d-36c3-4d01-9c45-faccb3941b62" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.472560 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.478106 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.478627 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.478907 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.482607 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.483082 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.494195 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"]
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.557789 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.558386 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.558443 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.558492 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ww75\" (UniqueName: \"kubernetes.io/projected/62d61250-750b-4a2d-b2d6-a5f1b4914da4-kube-api-access-4ww75\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.558605 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.660376 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.660516 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.660623 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.660680 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.660731 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ww75\" (UniqueName: \"kubernetes.io/projected/62d61250-750b-4a2d-b2d6-a5f1b4914da4-kube-api-access-4ww75\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.665392 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.665670 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.665670 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.677276 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.683553 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ww75\" (UniqueName: \"kubernetes.io/projected/62d61250-750b-4a2d-b2d6-a5f1b4914da4-kube-api-access-4ww75\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.793932 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"
Dec 02 14:24:34 crc kubenswrapper[4625]: I1202 14:24:34.869202 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:24:34 crc kubenswrapper[4625]: E1202 14:24:34.869609 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:24:35 crc kubenswrapper[4625]: I1202 14:24:35.395536 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m"]
Dec 02 14:24:36 crc kubenswrapper[4625]: I1202 14:24:36.376904 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m" event={"ID":"62d61250-750b-4a2d-b2d6-a5f1b4914da4","Type":"ContainerStarted","Data":"51d01f2224a3430de69035c605daa531c6b70b952b830664d8bb2710b800a966"}
Dec 02 14:24:36 crc kubenswrapper[4625]: I1202 14:24:36.377370 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m" event={"ID":"62d61250-750b-4a2d-b2d6-a5f1b4914da4","Type":"ContainerStarted","Data":"01a6f057e9e335d9ea0737731c1eeba928c1ead60c31ab5fe8e4f4f71342c562"}
Dec 02 14:24:36 crc kubenswrapper[4625]: I1202 14:24:36.411133 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m" podStartSLOduration=1.7619310129999999 podStartE2EDuration="2.411106215s" podCreationTimestamp="2025-12-02 14:24:34 +0000 UTC" firstStartedPulling="2025-12-02 14:24:35.405509004 +0000 UTC m=+2431.367686079" lastFinishedPulling="2025-12-02 14:24:36.054684206 +0000 UTC m=+2432.016861281" observedRunningTime="2025-12-02 14:24:36.402209484 +0000 UTC m=+2432.364386559" watchObservedRunningTime="2025-12-02 14:24:36.411106215 +0000 UTC m=+2432.373283290"
Dec 02 14:24:47 crc kubenswrapper[4625]: I1202 14:24:47.857858 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:24:47 crc kubenswrapper[4625]: E1202 14:24:47.861831 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:25:01 crc kubenswrapper[4625]: I1202 14:25:01.856911 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:25:01 crc kubenswrapper[4625]: E1202 14:25:01.858072 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:25:16 crc kubenswrapper[4625]: I1202 14:25:16.856815 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:25:16 crc kubenswrapper[4625]: E1202 14:25:16.858229 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:25:30 crc kubenswrapper[4625]: I1202 14:25:30.857199 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:25:30 crc kubenswrapper[4625]: E1202 14:25:30.859873 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:25:41 crc kubenswrapper[4625]: I1202 14:25:41.856712 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:25:41 crc kubenswrapper[4625]: E1202 14:25:41.857888 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:25:56 crc kubenswrapper[4625]: I1202 14:25:56.856358 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:25:56 crc kubenswrapper[4625]: E1202 14:25:56.857449 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:26:08 crc kubenswrapper[4625]: I1202 14:26:08.858634 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:26:08 crc kubenswrapper[4625]: E1202 14:26:08.860015 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:26:22 crc kubenswrapper[4625]: I1202 14:26:22.856010 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:26:22 crc kubenswrapper[4625]: E1202 14:26:22.857153 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:26:35 crc kubenswrapper[4625]: I1202 14:26:35.856693 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:26:35 crc kubenswrapper[4625]: E1202 14:26:35.857712 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:26:48 crc kubenswrapper[4625]: I1202 14:26:48.856743 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:26:48 crc kubenswrapper[4625]: E1202 14:26:48.857828 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:27:03 crc kubenswrapper[4625]: I1202 14:27:03.858850 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:27:03 crc kubenswrapper[4625]: E1202 14:27:03.859982 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:27:14 crc kubenswrapper[4625]: I1202 14:27:14.862951 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:27:14 crc kubenswrapper[4625]: E1202 14:27:14.864121 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:27:27 crc kubenswrapper[4625]: I1202 14:27:27.856984 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:27:27 crc kubenswrapper[4625]: E1202 14:27:27.858115 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:27:39 crc kubenswrapper[4625]: I1202 14:27:39.877824 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:27:39 crc kubenswrapper[4625]: E1202 14:27:39.879415 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:27:50 crc kubenswrapper[4625]: I1202 14:27:50.857339 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:27:50 crc kubenswrapper[4625]: E1202 14:27:50.858623 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:28:05 crc kubenswrapper[4625]: I1202 14:28:05.856105 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:28:05 crc kubenswrapper[4625]: E1202 14:28:05.857222 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:28:16 crc kubenswrapper[4625]: I1202 14:28:16.857043 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:28:16 crc kubenswrapper[4625]: E1202 14:28:16.858187 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:28:29 crc kubenswrapper[4625]: I1202 14:28:29.856635 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:28:29 crc kubenswrapper[4625]: E1202 14:28:29.857783 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:28:43 crc kubenswrapper[4625]: I1202 14:28:43.857652 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:28:43 crc kubenswrapper[4625]: E1202 14:28:43.858683 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:28:54 crc kubenswrapper[4625]: I1202 14:28:54.015913 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r6hst"]
Dec 02 14:28:54 crc kubenswrapper[4625]: I1202 14:28:54.020004 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:28:54 crc kubenswrapper[4625]: I1202 14:28:54.065941 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r6hst"]
Dec 02 14:28:54 crc kubenswrapper[4625]: I1202 14:28:54.172367 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmf6f\" (UniqueName: \"kubernetes.io/projected/12479119-7eb7-428a-9bd3-ffa19a646723-kube-api-access-qmf6f\") pod \"redhat-operators-r6hst\" (UID: \"12479119-7eb7-428a-9bd3-ffa19a646723\") " pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:28:54 crc kubenswrapper[4625]: I1202 14:28:54.172617 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12479119-7eb7-428a-9bd3-ffa19a646723-utilities\") pod \"redhat-operators-r6hst\" (UID: \"12479119-7eb7-428a-9bd3-ffa19a646723\") " pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:28:54 crc kubenswrapper[4625]: I1202 14:28:54.172679 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12479119-7eb7-428a-9bd3-ffa19a646723-catalog-content\") pod \"redhat-operators-r6hst\" (UID: \"12479119-7eb7-428a-9bd3-ffa19a646723\") " pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:28:54 crc kubenswrapper[4625]: I1202 14:28:54.274733 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12479119-7eb7-428a-9bd3-ffa19a646723-utilities\") pod \"redhat-operators-r6hst\" (UID: \"12479119-7eb7-428a-9bd3-ffa19a646723\") " pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:28:54 crc kubenswrapper[4625]: I1202 14:28:54.274820 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12479119-7eb7-428a-9bd3-ffa19a646723-catalog-content\") pod \"redhat-operators-r6hst\" (UID: \"12479119-7eb7-428a-9bd3-ffa19a646723\") " pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:28:54 crc kubenswrapper[4625]: I1202 14:28:54.274865 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmf6f\" (UniqueName: \"kubernetes.io/projected/12479119-7eb7-428a-9bd3-ffa19a646723-kube-api-access-qmf6f\") pod \"redhat-operators-r6hst\" (UID: \"12479119-7eb7-428a-9bd3-ffa19a646723\") " pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:28:54 crc kubenswrapper[4625]: I1202 14:28:54.275440 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12479119-7eb7-428a-9bd3-ffa19a646723-utilities\") pod \"redhat-operators-r6hst\" (UID: \"12479119-7eb7-428a-9bd3-ffa19a646723\") " pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:28:54 crc kubenswrapper[4625]: I1202 14:28:54.275584 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12479119-7eb7-428a-9bd3-ffa19a646723-catalog-content\") pod \"redhat-operators-r6hst\" (UID: \"12479119-7eb7-428a-9bd3-ffa19a646723\") " pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:28:54 crc kubenswrapper[4625]: I1202 14:28:54.317570 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmf6f\" (UniqueName: \"kubernetes.io/projected/12479119-7eb7-428a-9bd3-ffa19a646723-kube-api-access-qmf6f\") pod \"redhat-operators-r6hst\" (UID: \"12479119-7eb7-428a-9bd3-ffa19a646723\") " pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:28:54 crc kubenswrapper[4625]: I1202 14:28:54.347514 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:28:54 crc kubenswrapper[4625]: W1202 14:28:54.763761 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12479119_7eb7_428a_9bd3_ffa19a646723.slice/crio-237dc3b6b709c7c1c0d01dcc4a001d5804208f64f820d8473ccce379ba1ff887 WatchSource:0}: Error finding container 237dc3b6b709c7c1c0d01dcc4a001d5804208f64f820d8473ccce379ba1ff887: Status 404 returned error can't find the container with id 237dc3b6b709c7c1c0d01dcc4a001d5804208f64f820d8473ccce379ba1ff887
Dec 02 14:28:54 crc kubenswrapper[4625]: I1202 14:28:54.769656 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r6hst"]
Dec 02 14:28:55 crc kubenswrapper[4625]: I1202 14:28:55.731489 4625 generic.go:334] "Generic (PLEG): container finished" podID="12479119-7eb7-428a-9bd3-ffa19a646723" containerID="92ccb451fc6fe8f36ba63fb7fe23d307cf36be8fc3a16a2a621f8678cddd889d" exitCode=0
Dec 02 14:28:55 crc kubenswrapper[4625]: I1202 14:28:55.731612 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6hst" event={"ID":"12479119-7eb7-428a-9bd3-ffa19a646723","Type":"ContainerDied","Data":"92ccb451fc6fe8f36ba63fb7fe23d307cf36be8fc3a16a2a621f8678cddd889d"}
Dec 02 14:28:55 crc kubenswrapper[4625]: I1202 14:28:55.731993 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6hst" event={"ID":"12479119-7eb7-428a-9bd3-ffa19a646723","Type":"ContainerStarted","Data":"237dc3b6b709c7c1c0d01dcc4a001d5804208f64f820d8473ccce379ba1ff887"}
Dec 02 14:28:55 crc kubenswrapper[4625]: I1202 14:28:55.735002 4625 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 02 14:28:55 crc kubenswrapper[4625]: I1202 14:28:55.856687 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:28:55 crc kubenswrapper[4625]: E1202 14:28:55.856996 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:28:57 crc kubenswrapper[4625]: I1202 14:28:57.760889 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6hst" event={"ID":"12479119-7eb7-428a-9bd3-ffa19a646723","Type":"ContainerStarted","Data":"8621fc24a672a036739002ab5844f03fb2f201f48ee4e45125039ca99c9a3eb5"}
Dec 02 14:29:00 crc kubenswrapper[4625]: I1202 14:29:00.799431 4625 generic.go:334] "Generic (PLEG): container finished" podID="12479119-7eb7-428a-9bd3-ffa19a646723" containerID="8621fc24a672a036739002ab5844f03fb2f201f48ee4e45125039ca99c9a3eb5" exitCode=0
Dec 02 14:29:00 crc kubenswrapper[4625]: I1202 14:29:00.799573 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6hst" event={"ID":"12479119-7eb7-428a-9bd3-ffa19a646723","Type":"ContainerDied","Data":"8621fc24a672a036739002ab5844f03fb2f201f48ee4e45125039ca99c9a3eb5"}
Dec 02 14:29:02 crc kubenswrapper[4625]: I1202 14:29:02.840689 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6hst" event={"ID":"12479119-7eb7-428a-9bd3-ffa19a646723","Type":"ContainerStarted","Data":"8e4ebfae16af476346bdce1711b9f40e66c7e784a3cf88b8b80e1f24daf877b7"}
Dec 02 14:29:04 crc kubenswrapper[4625]: I1202 14:29:04.348735 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:29:04 crc kubenswrapper[4625]: I1202 14:29:04.349337 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:29:05 crc kubenswrapper[4625]: I1202 14:29:05.406263 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r6hst" podUID="12479119-7eb7-428a-9bd3-ffa19a646723" containerName="registry-server" probeResult="failure" output=<
Dec 02 14:29:05 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s
Dec 02 14:29:05 crc kubenswrapper[4625]: >
Dec 02 14:29:10 crc kubenswrapper[4625]: I1202 14:29:10.857696 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee"
Dec 02 14:29:10 crc kubenswrapper[4625]: E1202 14:29:10.858785 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:29:14 crc kubenswrapper[4625]: I1202 14:29:14.406667 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:29:14 crc kubenswrapper[4625]: I1202 14:29:14.441506 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r6hst" podStartSLOduration=15.141102284 podStartE2EDuration="21.441450828s" podCreationTimestamp="2025-12-02 14:28:53 +0000 UTC" firstStartedPulling="2025-12-02 14:28:55.734718036 +0000 UTC m=+2691.696895111" lastFinishedPulling="2025-12-02 14:29:02.03506658 +0000 UTC m=+2697.997243655" observedRunningTime="2025-12-02 14:29:02.873534378 +0000 UTC m=+2698.835711453" watchObservedRunningTime="2025-12-02 14:29:14.441450828 +0000 UTC m=+2710.403627903"
Dec 02 14:29:14 crc kubenswrapper[4625]: I1202 14:29:14.470575 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:29:14 crc kubenswrapper[4625]: I1202 14:29:14.681176 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r6hst"]
Dec 02 14:29:15 crc kubenswrapper[4625]: I1202 14:29:15.981939 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r6hst" podUID="12479119-7eb7-428a-9bd3-ffa19a646723" containerName="registry-server" containerID="cri-o://8e4ebfae16af476346bdce1711b9f40e66c7e784a3cf88b8b80e1f24daf877b7" gracePeriod=2
Dec 02 14:29:16 crc kubenswrapper[4625]: I1202 14:29:16.489426 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:29:16 crc kubenswrapper[4625]: I1202 14:29:16.687175 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12479119-7eb7-428a-9bd3-ffa19a646723-catalog-content\") pod \"12479119-7eb7-428a-9bd3-ffa19a646723\" (UID: \"12479119-7eb7-428a-9bd3-ffa19a646723\") "
Dec 02 14:29:16 crc kubenswrapper[4625]: I1202 14:29:16.687395 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12479119-7eb7-428a-9bd3-ffa19a646723-utilities\") pod \"12479119-7eb7-428a-9bd3-ffa19a646723\" (UID: \"12479119-7eb7-428a-9bd3-ffa19a646723\") "
Dec 02 14:29:16 crc kubenswrapper[4625]: I1202 14:29:16.687591 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmf6f\" (UniqueName: \"kubernetes.io/projected/12479119-7eb7-428a-9bd3-ffa19a646723-kube-api-access-qmf6f\") pod \"12479119-7eb7-428a-9bd3-ffa19a646723\" (UID: \"12479119-7eb7-428a-9bd3-ffa19a646723\") "
Dec 02 14:29:16 crc kubenswrapper[4625]: I1202 14:29:16.688744 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12479119-7eb7-428a-9bd3-ffa19a646723-utilities" (OuterVolumeSpecName: "utilities") pod "12479119-7eb7-428a-9bd3-ffa19a646723" (UID: "12479119-7eb7-428a-9bd3-ffa19a646723"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:29:16 crc kubenswrapper[4625]: I1202 14:29:16.698612 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12479119-7eb7-428a-9bd3-ffa19a646723-kube-api-access-qmf6f" (OuterVolumeSpecName: "kube-api-access-qmf6f") pod "12479119-7eb7-428a-9bd3-ffa19a646723" (UID: "12479119-7eb7-428a-9bd3-ffa19a646723"). InnerVolumeSpecName "kube-api-access-qmf6f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:29:16 crc kubenswrapper[4625]: I1202 14:29:16.790589 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12479119-7eb7-428a-9bd3-ffa19a646723-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 14:29:16 crc kubenswrapper[4625]: I1202 14:29:16.790650 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmf6f\" (UniqueName: \"kubernetes.io/projected/12479119-7eb7-428a-9bd3-ffa19a646723-kube-api-access-qmf6f\") on node \"crc\" DevicePath \"\""
Dec 02 14:29:16 crc kubenswrapper[4625]: I1202 14:29:16.816973 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12479119-7eb7-428a-9bd3-ffa19a646723-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12479119-7eb7-428a-9bd3-ffa19a646723" (UID: "12479119-7eb7-428a-9bd3-ffa19a646723"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:29:16 crc kubenswrapper[4625]: I1202 14:29:16.896200 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12479119-7eb7-428a-9bd3-ffa19a646723-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 14:29:17 crc kubenswrapper[4625]: I1202 14:29:17.005267 4625 generic.go:334] "Generic (PLEG): container finished" podID="12479119-7eb7-428a-9bd3-ffa19a646723" containerID="8e4ebfae16af476346bdce1711b9f40e66c7e784a3cf88b8b80e1f24daf877b7" exitCode=0
Dec 02 14:29:17 crc kubenswrapper[4625]: I1202 14:29:17.005599 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r6hst"
Dec 02 14:29:17 crc kubenswrapper[4625]: I1202 14:29:17.006587 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6hst" event={"ID":"12479119-7eb7-428a-9bd3-ffa19a646723","Type":"ContainerDied","Data":"8e4ebfae16af476346bdce1711b9f40e66c7e784a3cf88b8b80e1f24daf877b7"}
Dec 02 14:29:17 crc kubenswrapper[4625]: I1202 14:29:17.007697 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r6hst" event={"ID":"12479119-7eb7-428a-9bd3-ffa19a646723","Type":"ContainerDied","Data":"237dc3b6b709c7c1c0d01dcc4a001d5804208f64f820d8473ccce379ba1ff887"}
Dec 02 14:29:17 crc kubenswrapper[4625]: I1202 14:29:17.007753 4625 scope.go:117] "RemoveContainer" containerID="8e4ebfae16af476346bdce1711b9f40e66c7e784a3cf88b8b80e1f24daf877b7"
Dec 02 14:29:17 crc kubenswrapper[4625]: I1202 14:29:17.047608 4625 scope.go:117] "RemoveContainer" containerID="8621fc24a672a036739002ab5844f03fb2f201f48ee4e45125039ca99c9a3eb5"
Dec 02 14:29:17 crc kubenswrapper[4625]: I1202 14:29:17.056519 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r6hst"]
Dec 02 14:29:17 crc kubenswrapper[4625]: I1202 14:29:17.069445 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r6hst"]
Dec 02 14:29:17 crc kubenswrapper[4625]: I1202 14:29:17.080179 4625 scope.go:117] "RemoveContainer" containerID="92ccb451fc6fe8f36ba63fb7fe23d307cf36be8fc3a16a2a621f8678cddd889d"
Dec 02 14:29:17 crc kubenswrapper[4625]: I1202 14:29:17.149377 4625 scope.go:117] "RemoveContainer" containerID="8e4ebfae16af476346bdce1711b9f40e66c7e784a3cf88b8b80e1f24daf877b7"
Dec 02 14:29:17 crc kubenswrapper[4625]: E1202 14:29:17.149938 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e4ebfae16af476346bdce1711b9f40e66c7e784a3cf88b8b80e1f24daf877b7\": container with ID starting with 8e4ebfae16af476346bdce1711b9f40e66c7e784a3cf88b8b80e1f24daf877b7 not found: ID does not exist" containerID="8e4ebfae16af476346bdce1711b9f40e66c7e784a3cf88b8b80e1f24daf877b7"
Dec 02 14:29:17 crc kubenswrapper[4625]: I1202 14:29:17.150013 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e4ebfae16af476346bdce1711b9f40e66c7e784a3cf88b8b80e1f24daf877b7"} err="failed to get container status \"8e4ebfae16af476346bdce1711b9f40e66c7e784a3cf88b8b80e1f24daf877b7\": rpc error: code = NotFound desc = could not find container \"8e4ebfae16af476346bdce1711b9f40e66c7e784a3cf88b8b80e1f24daf877b7\": container with ID starting with 8e4ebfae16af476346bdce1711b9f40e66c7e784a3cf88b8b80e1f24daf877b7 not found: ID does not exist"
Dec 02 14:29:17 crc
kubenswrapper[4625]: I1202 14:29:17.150081 4625 scope.go:117] "RemoveContainer" containerID="8621fc24a672a036739002ab5844f03fb2f201f48ee4e45125039ca99c9a3eb5" Dec 02 14:29:17 crc kubenswrapper[4625]: E1202 14:29:17.150542 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8621fc24a672a036739002ab5844f03fb2f201f48ee4e45125039ca99c9a3eb5\": container with ID starting with 8621fc24a672a036739002ab5844f03fb2f201f48ee4e45125039ca99c9a3eb5 not found: ID does not exist" containerID="8621fc24a672a036739002ab5844f03fb2f201f48ee4e45125039ca99c9a3eb5" Dec 02 14:29:17 crc kubenswrapper[4625]: I1202 14:29:17.150585 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8621fc24a672a036739002ab5844f03fb2f201f48ee4e45125039ca99c9a3eb5"} err="failed to get container status \"8621fc24a672a036739002ab5844f03fb2f201f48ee4e45125039ca99c9a3eb5\": rpc error: code = NotFound desc = could not find container \"8621fc24a672a036739002ab5844f03fb2f201f48ee4e45125039ca99c9a3eb5\": container with ID starting with 8621fc24a672a036739002ab5844f03fb2f201f48ee4e45125039ca99c9a3eb5 not found: ID does not exist" Dec 02 14:29:17 crc kubenswrapper[4625]: I1202 14:29:17.150625 4625 scope.go:117] "RemoveContainer" containerID="92ccb451fc6fe8f36ba63fb7fe23d307cf36be8fc3a16a2a621f8678cddd889d" Dec 02 14:29:17 crc kubenswrapper[4625]: E1202 14:29:17.150926 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92ccb451fc6fe8f36ba63fb7fe23d307cf36be8fc3a16a2a621f8678cddd889d\": container with ID starting with 92ccb451fc6fe8f36ba63fb7fe23d307cf36be8fc3a16a2a621f8678cddd889d not found: ID does not exist" containerID="92ccb451fc6fe8f36ba63fb7fe23d307cf36be8fc3a16a2a621f8678cddd889d" Dec 02 14:29:17 crc kubenswrapper[4625]: I1202 14:29:17.150961 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92ccb451fc6fe8f36ba63fb7fe23d307cf36be8fc3a16a2a621f8678cddd889d"} err="failed to get container status \"92ccb451fc6fe8f36ba63fb7fe23d307cf36be8fc3a16a2a621f8678cddd889d\": rpc error: code = NotFound desc = could not find container \"92ccb451fc6fe8f36ba63fb7fe23d307cf36be8fc3a16a2a621f8678cddd889d\": container with ID starting with 92ccb451fc6fe8f36ba63fb7fe23d307cf36be8fc3a16a2a621f8678cddd889d not found: ID does not exist" Dec 02 14:29:18 crc kubenswrapper[4625]: I1202 14:29:18.872356 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12479119-7eb7-428a-9bd3-ffa19a646723" path="/var/lib/kubelet/pods/12479119-7eb7-428a-9bd3-ffa19a646723/volumes" Dec 02 14:29:20 crc kubenswrapper[4625]: I1202 14:29:20.078271 4625 generic.go:334] "Generic (PLEG): container finished" podID="62d61250-750b-4a2d-b2d6-a5f1b4914da4" containerID="51d01f2224a3430de69035c605daa531c6b70b952b830664d8bb2710b800a966" exitCode=0 Dec 02 14:29:20 crc kubenswrapper[4625]: I1202 14:29:20.078406 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m" event={"ID":"62d61250-750b-4a2d-b2d6-a5f1b4914da4","Type":"ContainerDied","Data":"51d01f2224a3430de69035c605daa531c6b70b952b830664d8bb2710b800a966"} Dec 02 14:29:21 crc kubenswrapper[4625]: I1202 14:29:21.609519 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m" Dec 02 14:29:21 crc kubenswrapper[4625]: I1202 14:29:21.650511 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-libvirt-combined-ca-bundle\") pod \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " Dec 02 14:29:21 crc kubenswrapper[4625]: I1202 14:29:21.650792 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ww75\" (UniqueName: \"kubernetes.io/projected/62d61250-750b-4a2d-b2d6-a5f1b4914da4-kube-api-access-4ww75\") pod \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " Dec 02 14:29:21 crc kubenswrapper[4625]: I1202 14:29:21.650877 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-inventory\") pod \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " Dec 02 14:29:21 crc kubenswrapper[4625]: I1202 14:29:21.651001 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-libvirt-secret-0\") pod \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " Dec 02 14:29:21 crc kubenswrapper[4625]: I1202 14:29:21.651119 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-ssh-key\") pod \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\" (UID: \"62d61250-750b-4a2d-b2d6-a5f1b4914da4\") " Dec 02 14:29:21 crc kubenswrapper[4625]: I1202 14:29:21.659244 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "62d61250-750b-4a2d-b2d6-a5f1b4914da4" (UID: "62d61250-750b-4a2d-b2d6-a5f1b4914da4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:21 crc kubenswrapper[4625]: I1202 14:29:21.668375 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62d61250-750b-4a2d-b2d6-a5f1b4914da4-kube-api-access-4ww75" (OuterVolumeSpecName: "kube-api-access-4ww75") pod "62d61250-750b-4a2d-b2d6-a5f1b4914da4" (UID: "62d61250-750b-4a2d-b2d6-a5f1b4914da4"). InnerVolumeSpecName "kube-api-access-4ww75". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:29:21 crc kubenswrapper[4625]: I1202 14:29:21.689178 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "62d61250-750b-4a2d-b2d6-a5f1b4914da4" (UID: "62d61250-750b-4a2d-b2d6-a5f1b4914da4"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:21 crc kubenswrapper[4625]: I1202 14:29:21.696539 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-inventory" (OuterVolumeSpecName: "inventory") pod "62d61250-750b-4a2d-b2d6-a5f1b4914da4" (UID: "62d61250-750b-4a2d-b2d6-a5f1b4914da4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:21 crc kubenswrapper[4625]: I1202 14:29:21.699396 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "62d61250-750b-4a2d-b2d6-a5f1b4914da4" (UID: "62d61250-750b-4a2d-b2d6-a5f1b4914da4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:29:21 crc kubenswrapper[4625]: I1202 14:29:21.753405 4625 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:21 crc kubenswrapper[4625]: I1202 14:29:21.753456 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:21 crc kubenswrapper[4625]: I1202 14:29:21.753474 4625 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:21 crc kubenswrapper[4625]: I1202 14:29:21.753494 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ww75\" (UniqueName: \"kubernetes.io/projected/62d61250-750b-4a2d-b2d6-a5f1b4914da4-kube-api-access-4ww75\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:21 crc kubenswrapper[4625]: I1202 14:29:21.753564 4625 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62d61250-750b-4a2d-b2d6-a5f1b4914da4-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.104616 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m" event={"ID":"62d61250-750b-4a2d-b2d6-a5f1b4914da4","Type":"ContainerDied","Data":"01a6f057e9e335d9ea0737731c1eeba928c1ead60c31ab5fe8e4f4f71342c562"} Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.104690 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.104672 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01a6f057e9e335d9ea0737731c1eeba928c1ead60c31ab5fe8e4f4f71342c562" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.346107 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq"] Dec 02 14:29:22 crc kubenswrapper[4625]: E1202 14:29:22.346723 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12479119-7eb7-428a-9bd3-ffa19a646723" containerName="extract-utilities" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.346793 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="12479119-7eb7-428a-9bd3-ffa19a646723" containerName="extract-utilities" Dec 02 14:29:22 crc kubenswrapper[4625]: E1202 14:29:22.346823 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12479119-7eb7-428a-9bd3-ffa19a646723" containerName="registry-server" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.346835 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="12479119-7eb7-428a-9bd3-ffa19a646723" containerName="registry-server" Dec 02 14:29:22 crc kubenswrapper[4625]: E1202 14:29:22.346901 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62d61250-750b-4a2d-b2d6-a5f1b4914da4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.346913 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="62d61250-750b-4a2d-b2d6-a5f1b4914da4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 02 14:29:22 crc kubenswrapper[4625]: E1202 14:29:22.346971 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12479119-7eb7-428a-9bd3-ffa19a646723" containerName="extract-content" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.346981 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="12479119-7eb7-428a-9bd3-ffa19a646723" containerName="extract-content" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.347419 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="62d61250-750b-4a2d-b2d6-a5f1b4914da4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.347441 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="12479119-7eb7-428a-9bd3-ffa19a646723" containerName="registry-server" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.348872 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.356819 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.357114 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.357412 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.357600 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.357915 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.358340 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.359058 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.366736 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq"] Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.471455 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.472339 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.472465 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.472567 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.472926 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.473077 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.473208 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hx49\" (UniqueName: \"kubernetes.io/projected/c531f95a-508b-48ea-bfb7-91659bd6df10-kube-api-access-9hx49\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.473346 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.473432 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.576044 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.576469 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.576633 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.576724 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.576805 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.576903 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.577026 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.577115 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.577828 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hx49\" (UniqueName: \"kubernetes.io/projected/c531f95a-508b-48ea-bfb7-91659bd6df10-kube-api-access-9hx49\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.581080 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.584869 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.586302 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.586618 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.587546 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.590905 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.593932 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.594348 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.599913 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hx49\" (UniqueName: \"kubernetes.io/projected/c531f95a-508b-48ea-bfb7-91659bd6df10-kube-api-access-9hx49\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qxsfq\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:22 crc kubenswrapper[4625]: I1202 14:29:22.672467 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:29:23 crc kubenswrapper[4625]: I1202 14:29:23.297080 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq"] Dec 02 14:29:24 crc kubenswrapper[4625]: I1202 14:29:24.153978 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" event={"ID":"c531f95a-508b-48ea-bfb7-91659bd6df10","Type":"ContainerStarted","Data":"bd99e8cfd7b27141877218d6496e238b69287c7d5af148748a586e5ac1ea70c6"} Dec 02 14:29:24 crc kubenswrapper[4625]: I1202 14:29:24.155500 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" event={"ID":"c531f95a-508b-48ea-bfb7-91659bd6df10","Type":"ContainerStarted","Data":"a898838c4aa85d3c75071d6199109106cf41cd1d7107a435848dfad033c44128"} Dec 02 14:29:25 crc kubenswrapper[4625]: I1202 14:29:25.856485 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee" Dec 02 14:29:26 crc kubenswrapper[4625]: I1202 14:29:26.177575 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"e1221f82188e7b3ced065d921a5d009af9803ddf85badbe077fcaa28988a9c41"} Dec 02 14:29:26 crc kubenswrapper[4625]: I1202 14:29:26.209708 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" podStartSLOduration=3.687827608 podStartE2EDuration="4.209682306s" podCreationTimestamp="2025-12-02 14:29:22 +0000 UTC" firstStartedPulling="2025-12-02 14:29:23.313332031 +0000 UTC m=+2719.275509096" lastFinishedPulling="2025-12-02 14:29:23.835186719 +0000 UTC m=+2719.797363794" observedRunningTime="2025-12-02 14:29:24.183993383 +0000 UTC m=+2720.146170498" watchObservedRunningTime="2025-12-02 14:29:26.209682306 +0000 UTC m=+2722.171859381" Dec 02 14:30:00 crc kubenswrapper[4625]: I1202 14:30:00.172605 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp"] Dec 02 14:30:00 crc kubenswrapper[4625]: I1202 14:30:00.188064 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp"] Dec 02 14:30:00 crc kubenswrapper[4625]: I1202 14:30:00.188573 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" Dec 02 14:30:00 crc kubenswrapper[4625]: I1202 14:30:00.193858 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 02 14:30:00 crc kubenswrapper[4625]: I1202 14:30:00.194113 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 02 14:30:00 crc kubenswrapper[4625]: I1202 14:30:00.331749 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3dd164e-dc52-4dee-afe2-4042f69ffa85-secret-volume\") pod \"collect-profiles-29411430-mkzbp\" (UID: \"f3dd164e-dc52-4dee-afe2-4042f69ffa85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" Dec 02 14:30:00 crc kubenswrapper[4625]: I1202 14:30:00.331854 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3dd164e-dc52-4dee-afe2-4042f69ffa85-config-volume\") pod \"collect-profiles-29411430-mkzbp\" (UID: \"f3dd164e-dc52-4dee-afe2-4042f69ffa85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" Dec 02 14:30:00 crc kubenswrapper[4625]: I1202 14:30:00.331942 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvc7n\" (UniqueName: \"kubernetes.io/projected/f3dd164e-dc52-4dee-afe2-4042f69ffa85-kube-api-access-tvc7n\") pod \"collect-profiles-29411430-mkzbp\" (UID: \"f3dd164e-dc52-4dee-afe2-4042f69ffa85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" Dec 02 14:30:00 crc kubenswrapper[4625]: I1202 14:30:00.434054 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc7n\" (UniqueName: \"kubernetes.io/projected/f3dd164e-dc52-4dee-afe2-4042f69ffa85-kube-api-access-tvc7n\") pod \"collect-profiles-29411430-mkzbp\" (UID: \"f3dd164e-dc52-4dee-afe2-4042f69ffa85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" Dec 02 14:30:00 crc kubenswrapper[4625]: I1202 14:30:00.434181 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3dd164e-dc52-4dee-afe2-4042f69ffa85-secret-volume\") pod \"collect-profiles-29411430-mkzbp\" (UID: \"f3dd164e-dc52-4dee-afe2-4042f69ffa85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" Dec 02 14:30:00 crc kubenswrapper[4625]: I1202 14:30:00.434250 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3dd164e-dc52-4dee-afe2-4042f69ffa85-config-volume\") pod \"collect-profiles-29411430-mkzbp\" (UID: \"f3dd164e-dc52-4dee-afe2-4042f69ffa85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" Dec 02 14:30:00 crc kubenswrapper[4625]: I1202 14:30:00.435440 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3dd164e-dc52-4dee-afe2-4042f69ffa85-config-volume\") pod \"collect-profiles-29411430-mkzbp\" (UID: \"f3dd164e-dc52-4dee-afe2-4042f69ffa85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" Dec 02 14:30:00 crc 
kubenswrapper[4625]: I1202 14:30:00.444567 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3dd164e-dc52-4dee-afe2-4042f69ffa85-secret-volume\") pod \"collect-profiles-29411430-mkzbp\" (UID: \"f3dd164e-dc52-4dee-afe2-4042f69ffa85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" Dec 02 14:30:00 crc kubenswrapper[4625]: I1202 14:30:00.476852 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvc7n\" (UniqueName: \"kubernetes.io/projected/f3dd164e-dc52-4dee-afe2-4042f69ffa85-kube-api-access-tvc7n\") pod \"collect-profiles-29411430-mkzbp\" (UID: \"f3dd164e-dc52-4dee-afe2-4042f69ffa85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" Dec 02 14:30:00 crc kubenswrapper[4625]: I1202 14:30:00.526407 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" Dec 02 14:30:01 crc kubenswrapper[4625]: I1202 14:30:01.218948 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp"] Dec 02 14:30:01 crc kubenswrapper[4625]: I1202 14:30:01.597528 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" event={"ID":"f3dd164e-dc52-4dee-afe2-4042f69ffa85","Type":"ContainerStarted","Data":"e8c1f81ddf2b14b5d1b42073c2afd5a53dbd24a26d5329cc0b5f0e16b35cca83"} Dec 02 14:30:01 crc kubenswrapper[4625]: I1202 14:30:01.598038 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" event={"ID":"f3dd164e-dc52-4dee-afe2-4042f69ffa85","Type":"ContainerStarted","Data":"81e6ecd7ec0dd8a32744f695ede4d1478c4b48dc4a1337e5f6d1aeecd7fbcec3"} Dec 02 14:30:01 crc kubenswrapper[4625]: I1202 14:30:01.634802 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" podStartSLOduration=1.6347717419999999 podStartE2EDuration="1.634771742s" podCreationTimestamp="2025-12-02 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 14:30:01.629266095 +0000 UTC m=+2757.591443170" watchObservedRunningTime="2025-12-02 14:30:01.634771742 +0000 UTC m=+2757.596948817" Dec 02 14:30:02 crc kubenswrapper[4625]: I1202 14:30:02.620366 4625 generic.go:334] "Generic (PLEG): container finished" podID="f3dd164e-dc52-4dee-afe2-4042f69ffa85" containerID="e8c1f81ddf2b14b5d1b42073c2afd5a53dbd24a26d5329cc0b5f0e16b35cca83" exitCode=0 Dec 02 14:30:02 crc kubenswrapper[4625]: I1202 14:30:02.620468 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" event={"ID":"f3dd164e-dc52-4dee-afe2-4042f69ffa85","Type":"ContainerDied","Data":"e8c1f81ddf2b14b5d1b42073c2afd5a53dbd24a26d5329cc0b5f0e16b35cca83"} Dec 02 14:30:04 crc kubenswrapper[4625]: I1202 14:30:04.019823 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" Dec 02 14:30:04 crc kubenswrapper[4625]: I1202 14:30:04.151853 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3dd164e-dc52-4dee-afe2-4042f69ffa85-secret-volume\") pod \"f3dd164e-dc52-4dee-afe2-4042f69ffa85\" (UID: \"f3dd164e-dc52-4dee-afe2-4042f69ffa85\") " Dec 02 14:30:04 crc kubenswrapper[4625]: I1202 14:30:04.152494 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvc7n\" (UniqueName: \"kubernetes.io/projected/f3dd164e-dc52-4dee-afe2-4042f69ffa85-kube-api-access-tvc7n\") pod \"f3dd164e-dc52-4dee-afe2-4042f69ffa85\" (UID: \"f3dd164e-dc52-4dee-afe2-4042f69ffa85\") " Dec 02 14:30:04 crc kubenswrapper[4625]: I1202 14:30:04.152543 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3dd164e-dc52-4dee-afe2-4042f69ffa85-config-volume\") pod \"f3dd164e-dc52-4dee-afe2-4042f69ffa85\" (UID: \"f3dd164e-dc52-4dee-afe2-4042f69ffa85\") " Dec 02 14:30:04 crc kubenswrapper[4625]: I1202 14:30:04.153611 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3dd164e-dc52-4dee-afe2-4042f69ffa85-config-volume" (OuterVolumeSpecName: "config-volume") pod "f3dd164e-dc52-4dee-afe2-4042f69ffa85" (UID: "f3dd164e-dc52-4dee-afe2-4042f69ffa85"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:30:04 crc kubenswrapper[4625]: I1202 14:30:04.165700 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3dd164e-dc52-4dee-afe2-4042f69ffa85-kube-api-access-tvc7n" (OuterVolumeSpecName: "kube-api-access-tvc7n") pod "f3dd164e-dc52-4dee-afe2-4042f69ffa85" (UID: "f3dd164e-dc52-4dee-afe2-4042f69ffa85"). InnerVolumeSpecName "kube-api-access-tvc7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:30:04 crc kubenswrapper[4625]: I1202 14:30:04.173593 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3dd164e-dc52-4dee-afe2-4042f69ffa85-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f3dd164e-dc52-4dee-afe2-4042f69ffa85" (UID: "f3dd164e-dc52-4dee-afe2-4042f69ffa85"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:30:04 crc kubenswrapper[4625]: I1202 14:30:04.255872 4625 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3dd164e-dc52-4dee-afe2-4042f69ffa85-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:30:04 crc kubenswrapper[4625]: I1202 14:30:04.256410 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvc7n\" (UniqueName: \"kubernetes.io/projected/f3dd164e-dc52-4dee-afe2-4042f69ffa85-kube-api-access-tvc7n\") on node \"crc\" DevicePath \"\"" Dec 02 14:30:04 crc kubenswrapper[4625]: I1202 14:30:04.256501 4625 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3dd164e-dc52-4dee-afe2-4042f69ffa85-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 14:30:04 crc kubenswrapper[4625]: I1202 14:30:04.306984 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44"] Dec 02 14:30:04 crc kubenswrapper[4625]: I1202 14:30:04.317271 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411385-jvp44"] Dec 02 14:30:04 crc kubenswrapper[4625]: I1202 14:30:04.648004 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" event={"ID":"f3dd164e-dc52-4dee-afe2-4042f69ffa85","Type":"ContainerDied","Data":"81e6ecd7ec0dd8a32744f695ede4d1478c4b48dc4a1337e5f6d1aeecd7fbcec3"} Dec 02 14:30:04 crc kubenswrapper[4625]: I1202 14:30:04.648079 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81e6ecd7ec0dd8a32744f695ede4d1478c4b48dc4a1337e5f6d1aeecd7fbcec3" Dec 02 14:30:04 crc kubenswrapper[4625]: I1202 14:30:04.648116 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp" Dec 02 14:30:04 crc kubenswrapper[4625]: I1202 14:30:04.875080 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8bad892-59d1-45b5-a388-156353675860" path="/var/lib/kubelet/pods/c8bad892-59d1-45b5-a388-156353675860/volumes" Dec 02 14:30:18 crc kubenswrapper[4625]: I1202 14:30:18.558398 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fk2gf"] Dec 02 14:30:18 crc kubenswrapper[4625]: E1202 14:30:18.566585 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3dd164e-dc52-4dee-afe2-4042f69ffa85" containerName="collect-profiles" Dec 02 14:30:18 crc kubenswrapper[4625]: I1202 14:30:18.566641 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3dd164e-dc52-4dee-afe2-4042f69ffa85" containerName="collect-profiles" Dec 02 14:30:18 crc kubenswrapper[4625]: I1202 14:30:18.567990 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3dd164e-dc52-4dee-afe2-4042f69ffa85" containerName="collect-profiles" Dec 02 14:30:18 crc kubenswrapper[4625]: I1202 14:30:18.615673 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fk2gf"] Dec 02 14:30:18 crc kubenswrapper[4625]: I1202 14:30:18.615888 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:18 crc kubenswrapper[4625]: I1202 14:30:18.742033 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbtqs\" (UniqueName: \"kubernetes.io/projected/9959b35b-d101-4f2e-915c-a6a51c5848e3-kube-api-access-fbtqs\") pod \"certified-operators-fk2gf\" (UID: \"9959b35b-d101-4f2e-915c-a6a51c5848e3\") " pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:18 crc kubenswrapper[4625]: I1202 14:30:18.742126 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9959b35b-d101-4f2e-915c-a6a51c5848e3-utilities\") pod \"certified-operators-fk2gf\" (UID: \"9959b35b-d101-4f2e-915c-a6a51c5848e3\") " pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:18 crc kubenswrapper[4625]: I1202 14:30:18.742254 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9959b35b-d101-4f2e-915c-a6a51c5848e3-catalog-content\") pod \"certified-operators-fk2gf\" (UID: \"9959b35b-d101-4f2e-915c-a6a51c5848e3\") " pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:18 crc kubenswrapper[4625]: I1202 14:30:18.844419 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9959b35b-d101-4f2e-915c-a6a51c5848e3-catalog-content\") pod \"certified-operators-fk2gf\" (UID: \"9959b35b-d101-4f2e-915c-a6a51c5848e3\") " pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:18 crc kubenswrapper[4625]: I1202 14:30:18.844516 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbtqs\" (UniqueName: \"kubernetes.io/projected/9959b35b-d101-4f2e-915c-a6a51c5848e3-kube-api-access-fbtqs\") pod \"certified-operators-fk2gf\" (UID: \"9959b35b-d101-4f2e-915c-a6a51c5848e3\") " pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:18 crc kubenswrapper[4625]: I1202 14:30:18.844552 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9959b35b-d101-4f2e-915c-a6a51c5848e3-utilities\") pod \"certified-operators-fk2gf\" (UID: \"9959b35b-d101-4f2e-915c-a6a51c5848e3\") " pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:18 crc kubenswrapper[4625]: I1202 14:30:18.845141 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9959b35b-d101-4f2e-915c-a6a51c5848e3-utilities\") pod \"certified-operators-fk2gf\" (UID: \"9959b35b-d101-4f2e-915c-a6a51c5848e3\") " pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:18 crc kubenswrapper[4625]: I1202 14:30:18.845540 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9959b35b-d101-4f2e-915c-a6a51c5848e3-catalog-content\") pod \"certified-operators-fk2gf\" (UID: \"9959b35b-d101-4f2e-915c-a6a51c5848e3\") " pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:18 crc kubenswrapper[4625]: I1202 14:30:18.873409 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbtqs\" (UniqueName: \"kubernetes.io/projected/9959b35b-d101-4f2e-915c-a6a51c5848e3-kube-api-access-fbtqs\") pod 
\"certified-operators-fk2gf\" (UID: \"9959b35b-d101-4f2e-915c-a6a51c5848e3\") " pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:18 crc kubenswrapper[4625]: I1202 14:30:18.986302 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:19 crc kubenswrapper[4625]: I1202 14:30:19.621034 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fk2gf"] Dec 02 14:30:19 crc kubenswrapper[4625]: I1202 14:30:19.815027 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fk2gf" event={"ID":"9959b35b-d101-4f2e-915c-a6a51c5848e3","Type":"ContainerStarted","Data":"71c75b880bff7e39f7a2b15c188850deadf2b1f68094b06a38426fcd21d857fd"} Dec 02 14:30:20 crc kubenswrapper[4625]: I1202 14:30:20.827271 4625 generic.go:334] "Generic (PLEG): container finished" podID="9959b35b-d101-4f2e-915c-a6a51c5848e3" containerID="4ea86b6c1acf2dabbf0025b4c6bdd4df1eda988281412c5faf4354ce537f8ab2" exitCode=0 Dec 02 14:30:20 crc kubenswrapper[4625]: I1202 14:30:20.827397 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fk2gf" event={"ID":"9959b35b-d101-4f2e-915c-a6a51c5848e3","Type":"ContainerDied","Data":"4ea86b6c1acf2dabbf0025b4c6bdd4df1eda988281412c5faf4354ce537f8ab2"} Dec 02 14:30:24 crc kubenswrapper[4625]: I1202 14:30:24.872183 4625 generic.go:334] "Generic (PLEG): container finished" podID="9959b35b-d101-4f2e-915c-a6a51c5848e3" containerID="d13a0ca4a820e1ead5363b8f546cf7a68484c9a18c07550e87f18390c6bb04bb" exitCode=0 Dec 02 14:30:24 crc kubenswrapper[4625]: I1202 14:30:24.876336 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fk2gf" event={"ID":"9959b35b-d101-4f2e-915c-a6a51c5848e3","Type":"ContainerDied","Data":"d13a0ca4a820e1ead5363b8f546cf7a68484c9a18c07550e87f18390c6bb04bb"} Dec 02 14:30:25 crc kubenswrapper[4625]: I1202 14:30:25.889575 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fk2gf" event={"ID":"9959b35b-d101-4f2e-915c-a6a51c5848e3","Type":"ContainerStarted","Data":"d9a60ebf9585bf25977fd0e7888e7c85ed072064cc4ed0c366c77c69c5c3275d"} Dec 02 14:30:28 crc kubenswrapper[4625]: I1202 14:30:28.988638 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:28 crc kubenswrapper[4625]: I1202 14:30:28.991221 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:29 crc kubenswrapper[4625]: I1202 14:30:29.045988 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:29 crc kubenswrapper[4625]: I1202 14:30:29.076340 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fk2gf" podStartSLOduration=6.407495743 podStartE2EDuration="11.076296889s" podCreationTimestamp="2025-12-02 14:30:18 +0000 UTC" firstStartedPulling="2025-12-02 14:30:20.830620956 +0000 UTC m=+2776.792798031" lastFinishedPulling="2025-12-02 14:30:25.499422112 +0000 UTC m=+2781.461599177" observedRunningTime="2025-12-02 14:30:25.910722299 +0000 UTC m=+2781.872899384" watchObservedRunningTime="2025-12-02 14:30:29.076296889 +0000 UTC m=+2785.038473964" Dec 02 14:30:31 crc kubenswrapper[4625]: I1202 
14:30:31.026361 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:31 crc kubenswrapper[4625]: I1202 14:30:31.104966 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fk2gf"] Dec 02 14:30:32 crc kubenswrapper[4625]: I1202 14:30:32.969558 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fk2gf" podUID="9959b35b-d101-4f2e-915c-a6a51c5848e3" containerName="registry-server" containerID="cri-o://d9a60ebf9585bf25977fd0e7888e7c85ed072064cc4ed0c366c77c69c5c3275d" gracePeriod=2 Dec 02 14:30:33 crc kubenswrapper[4625]: I1202 14:30:33.553558 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:33 crc kubenswrapper[4625]: I1202 14:30:33.719812 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9959b35b-d101-4f2e-915c-a6a51c5848e3-catalog-content\") pod \"9959b35b-d101-4f2e-915c-a6a51c5848e3\" (UID: \"9959b35b-d101-4f2e-915c-a6a51c5848e3\") " Dec 02 14:30:33 crc kubenswrapper[4625]: I1202 14:30:33.720442 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9959b35b-d101-4f2e-915c-a6a51c5848e3-utilities\") pod \"9959b35b-d101-4f2e-915c-a6a51c5848e3\" (UID: \"9959b35b-d101-4f2e-915c-a6a51c5848e3\") " Dec 02 14:30:33 crc kubenswrapper[4625]: I1202 14:30:33.720498 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbtqs\" (UniqueName: \"kubernetes.io/projected/9959b35b-d101-4f2e-915c-a6a51c5848e3-kube-api-access-fbtqs\") pod \"9959b35b-d101-4f2e-915c-a6a51c5848e3\" (UID: \"9959b35b-d101-4f2e-915c-a6a51c5848e3\") " Dec 02 14:30:33 crc kubenswrapper[4625]: I1202 14:30:33.721594 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9959b35b-d101-4f2e-915c-a6a51c5848e3-utilities" (OuterVolumeSpecName: "utilities") pod "9959b35b-d101-4f2e-915c-a6a51c5848e3" (UID: "9959b35b-d101-4f2e-915c-a6a51c5848e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:30:33 crc kubenswrapper[4625]: I1202 14:30:33.739904 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9959b35b-d101-4f2e-915c-a6a51c5848e3-kube-api-access-fbtqs" (OuterVolumeSpecName: "kube-api-access-fbtqs") pod "9959b35b-d101-4f2e-915c-a6a51c5848e3" (UID: "9959b35b-d101-4f2e-915c-a6a51c5848e3"). InnerVolumeSpecName "kube-api-access-fbtqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:30:33 crc kubenswrapper[4625]: I1202 14:30:33.789546 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9959b35b-d101-4f2e-915c-a6a51c5848e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9959b35b-d101-4f2e-915c-a6a51c5848e3" (UID: "9959b35b-d101-4f2e-915c-a6a51c5848e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:30:33 crc kubenswrapper[4625]: I1202 14:30:33.824189 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9959b35b-d101-4f2e-915c-a6a51c5848e3-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:30:33 crc kubenswrapper[4625]: I1202 14:30:33.824250 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbtqs\" (UniqueName: \"kubernetes.io/projected/9959b35b-d101-4f2e-915c-a6a51c5848e3-kube-api-access-fbtqs\") on node \"crc\" DevicePath \"\"" Dec 02 14:30:33 crc kubenswrapper[4625]: I1202 14:30:33.824268 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9959b35b-d101-4f2e-915c-a6a51c5848e3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:30:33 crc kubenswrapper[4625]: I1202 14:30:33.984507 4625 generic.go:334] "Generic (PLEG): container finished" podID="9959b35b-d101-4f2e-915c-a6a51c5848e3" containerID="d9a60ebf9585bf25977fd0e7888e7c85ed072064cc4ed0c366c77c69c5c3275d" exitCode=0 Dec 02 14:30:33 crc kubenswrapper[4625]: I1202 14:30:33.984578 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fk2gf" event={"ID":"9959b35b-d101-4f2e-915c-a6a51c5848e3","Type":"ContainerDied","Data":"d9a60ebf9585bf25977fd0e7888e7c85ed072064cc4ed0c366c77c69c5c3275d"} Dec 02 14:30:33 crc kubenswrapper[4625]: I1202 14:30:33.984608 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fk2gf" Dec 02 14:30:33 crc kubenswrapper[4625]: I1202 14:30:33.984629 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fk2gf" event={"ID":"9959b35b-d101-4f2e-915c-a6a51c5848e3","Type":"ContainerDied","Data":"71c75b880bff7e39f7a2b15c188850deadf2b1f68094b06a38426fcd21d857fd"} Dec 02 14:30:33 crc kubenswrapper[4625]: I1202 14:30:33.984662 4625 scope.go:117] "RemoveContainer" containerID="d9a60ebf9585bf25977fd0e7888e7c85ed072064cc4ed0c366c77c69c5c3275d" Dec 02 14:30:34 crc kubenswrapper[4625]: I1202 14:30:34.014547 4625 scope.go:117] "RemoveContainer" containerID="d13a0ca4a820e1ead5363b8f546cf7a68484c9a18c07550e87f18390c6bb04bb" Dec 02 14:30:34 crc kubenswrapper[4625]: I1202 14:30:34.025752 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fk2gf"] Dec 02 14:30:34 crc kubenswrapper[4625]: I1202 14:30:34.042351 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fk2gf"] Dec 02 14:30:34 crc kubenswrapper[4625]: I1202 14:30:34.061726 4625 scope.go:117] "RemoveContainer" containerID="4ea86b6c1acf2dabbf0025b4c6bdd4df1eda988281412c5faf4354ce537f8ab2" Dec 02 14:30:34 crc kubenswrapper[4625]: I1202 14:30:34.111570 4625 scope.go:117] "RemoveContainer" containerID="d9a60ebf9585bf25977fd0e7888e7c85ed072064cc4ed0c366c77c69c5c3275d" Dec 02 14:30:34 crc kubenswrapper[4625]: E1202 14:30:34.112256 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9a60ebf9585bf25977fd0e7888e7c85ed072064cc4ed0c366c77c69c5c3275d\": container with ID starting with d9a60ebf9585bf25977fd0e7888e7c85ed072064cc4ed0c366c77c69c5c3275d not found: ID does not exist" containerID="d9a60ebf9585bf25977fd0e7888e7c85ed072064cc4ed0c366c77c69c5c3275d" Dec 02 14:30:34 crc kubenswrapper[4625]: I1202 14:30:34.112332 
4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9a60ebf9585bf25977fd0e7888e7c85ed072064cc4ed0c366c77c69c5c3275d"} err="failed to get container status \"d9a60ebf9585bf25977fd0e7888e7c85ed072064cc4ed0c366c77c69c5c3275d\": rpc error: code = NotFound desc = could not find container \"d9a60ebf9585bf25977fd0e7888e7c85ed072064cc4ed0c366c77c69c5c3275d\": container with ID starting with d9a60ebf9585bf25977fd0e7888e7c85ed072064cc4ed0c366c77c69c5c3275d not found: ID does not exist" Dec 02 14:30:34 crc kubenswrapper[4625]: I1202 14:30:34.112368 4625 scope.go:117] "RemoveContainer" containerID="d13a0ca4a820e1ead5363b8f546cf7a68484c9a18c07550e87f18390c6bb04bb" Dec 02 14:30:34 crc kubenswrapper[4625]: E1202 14:30:34.112758 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d13a0ca4a820e1ead5363b8f546cf7a68484c9a18c07550e87f18390c6bb04bb\": container with ID starting with d13a0ca4a820e1ead5363b8f546cf7a68484c9a18c07550e87f18390c6bb04bb not found: ID does not exist" containerID="d13a0ca4a820e1ead5363b8f546cf7a68484c9a18c07550e87f18390c6bb04bb" Dec 02 14:30:34 crc kubenswrapper[4625]: I1202 14:30:34.112812 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d13a0ca4a820e1ead5363b8f546cf7a68484c9a18c07550e87f18390c6bb04bb"} err="failed to get container status \"d13a0ca4a820e1ead5363b8f546cf7a68484c9a18c07550e87f18390c6bb04bb\": rpc error: code = NotFound desc = could not find container \"d13a0ca4a820e1ead5363b8f546cf7a68484c9a18c07550e87f18390c6bb04bb\": container with ID starting with d13a0ca4a820e1ead5363b8f546cf7a68484c9a18c07550e87f18390c6bb04bb not found: ID does not exist" Dec 02 14:30:34 crc kubenswrapper[4625]: I1202 14:30:34.112833 4625 scope.go:117] "RemoveContainer" containerID="4ea86b6c1acf2dabbf0025b4c6bdd4df1eda988281412c5faf4354ce537f8ab2" Dec 02 14:30:34 crc kubenswrapper[4625]: E1202 14:30:34.113113 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ea86b6c1acf2dabbf0025b4c6bdd4df1eda988281412c5faf4354ce537f8ab2\": container with ID starting with 4ea86b6c1acf2dabbf0025b4c6bdd4df1eda988281412c5faf4354ce537f8ab2 not found: ID does not exist" containerID="4ea86b6c1acf2dabbf0025b4c6bdd4df1eda988281412c5faf4354ce537f8ab2" Dec 02 14:30:34 crc kubenswrapper[4625]: I1202 14:30:34.113143 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ea86b6c1acf2dabbf0025b4c6bdd4df1eda988281412c5faf4354ce537f8ab2"} err="failed to get container status \"4ea86b6c1acf2dabbf0025b4c6bdd4df1eda988281412c5faf4354ce537f8ab2\": rpc error: code = NotFound desc = could not find container \"4ea86b6c1acf2dabbf0025b4c6bdd4df1eda988281412c5faf4354ce537f8ab2\": container with ID starting with 4ea86b6c1acf2dabbf0025b4c6bdd4df1eda988281412c5faf4354ce537f8ab2 not found: ID does not exist" Dec 02 14:30:34 crc kubenswrapper[4625]: I1202 14:30:34.875727 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9959b35b-d101-4f2e-915c-a6a51c5848e3" path="/var/lib/kubelet/pods/9959b35b-d101-4f2e-915c-a6a51c5848e3/volumes" Dec 02 14:30:37 crc kubenswrapper[4625]: I1202 14:30:37.153749 4625 scope.go:117] "RemoveContainer" containerID="8d030dcfc52bd3a37718d1d50f8ce82c519aaf5871fc1b47e3aae65252c619df" Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.402126 4625 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-7q89c"] Dec 02 14:31:02 crc kubenswrapper[4625]: E1202 14:31:02.403853 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9959b35b-d101-4f2e-915c-a6a51c5848e3" containerName="registry-server" Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.403881 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="9959b35b-d101-4f2e-915c-a6a51c5848e3" containerName="registry-server" Dec 02 14:31:02 crc kubenswrapper[4625]: E1202 14:31:02.403919 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9959b35b-d101-4f2e-915c-a6a51c5848e3" containerName="extract-utilities" Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.403931 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="9959b35b-d101-4f2e-915c-a6a51c5848e3" containerName="extract-utilities" Dec 02 14:31:02 crc kubenswrapper[4625]: E1202 14:31:02.403972 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9959b35b-d101-4f2e-915c-a6a51c5848e3" containerName="extract-content" Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.403983 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="9959b35b-d101-4f2e-915c-a6a51c5848e3" containerName="extract-content" Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.404498 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="9959b35b-d101-4f2e-915c-a6a51c5848e3" containerName="registry-server" Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.407237 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.416483 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7q89c"] Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.506467 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed379ea-8580-41a6-8077-6d2347d13937-utilities\") pod \"redhat-marketplace-7q89c\" (UID: \"2ed379ea-8580-41a6-8077-6d2347d13937\") " pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.507188 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed379ea-8580-41a6-8077-6d2347d13937-catalog-content\") pod \"redhat-marketplace-7q89c\" (UID: \"2ed379ea-8580-41a6-8077-6d2347d13937\") " pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.507273 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlg4h\" (UniqueName: \"kubernetes.io/projected/2ed379ea-8580-41a6-8077-6d2347d13937-kube-api-access-qlg4h\") pod \"redhat-marketplace-7q89c\" (UID: \"2ed379ea-8580-41a6-8077-6d2347d13937\") " pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.609981 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed379ea-8580-41a6-8077-6d2347d13937-catalog-content\") pod \"redhat-marketplace-7q89c\" (UID: \"2ed379ea-8580-41a6-8077-6d2347d13937\") " pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.610031 4625 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qlg4h\" (UniqueName: \"kubernetes.io/projected/2ed379ea-8580-41a6-8077-6d2347d13937-kube-api-access-qlg4h\") pod \"redhat-marketplace-7q89c\" (UID: \"2ed379ea-8580-41a6-8077-6d2347d13937\") " pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.610186 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed379ea-8580-41a6-8077-6d2347d13937-utilities\") pod \"redhat-marketplace-7q89c\" (UID: \"2ed379ea-8580-41a6-8077-6d2347d13937\") " pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.610854 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed379ea-8580-41a6-8077-6d2347d13937-utilities\") pod \"redhat-marketplace-7q89c\" (UID: \"2ed379ea-8580-41a6-8077-6d2347d13937\") " pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.612143 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed379ea-8580-41a6-8077-6d2347d13937-catalog-content\") pod \"redhat-marketplace-7q89c\" (UID: \"2ed379ea-8580-41a6-8077-6d2347d13937\") " pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.640955 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlg4h\" (UniqueName: \"kubernetes.io/projected/2ed379ea-8580-41a6-8077-6d2347d13937-kube-api-access-qlg4h\") pod \"redhat-marketplace-7q89c\" (UID: \"2ed379ea-8580-41a6-8077-6d2347d13937\") " pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:02 crc kubenswrapper[4625]: I1202 14:31:02.738944 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:03 crc kubenswrapper[4625]: I1202 14:31:03.317249 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7q89c"] Dec 02 14:31:04 crc kubenswrapper[4625]: I1202 14:31:04.327055 4625 generic.go:334] "Generic (PLEG): container finished" podID="2ed379ea-8580-41a6-8077-6d2347d13937" containerID="a2adf1a966480c34caba797bc7e4142723c03b961794868512f274b8b4444eb5" exitCode=0 Dec 02 14:31:04 crc kubenswrapper[4625]: I1202 14:31:04.327129 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q89c" event={"ID":"2ed379ea-8580-41a6-8077-6d2347d13937","Type":"ContainerDied","Data":"a2adf1a966480c34caba797bc7e4142723c03b961794868512f274b8b4444eb5"} Dec 02 14:31:04 crc kubenswrapper[4625]: I1202 14:31:04.327756 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q89c" event={"ID":"2ed379ea-8580-41a6-8077-6d2347d13937","Type":"ContainerStarted","Data":"2a76f0333903c090b8eba800645dcf276840e21a148c16e9b7113c9060cccb5f"} Dec 02 14:31:06 crc kubenswrapper[4625]: I1202 14:31:06.356106 4625 generic.go:334] "Generic (PLEG): container finished" podID="2ed379ea-8580-41a6-8077-6d2347d13937" containerID="17d73a52158b5d49dfe489e5a07e82aa5291ae512e9c651da2875d9c492ac3ee" exitCode=0 Dec 02 14:31:06 crc kubenswrapper[4625]: I1202 14:31:06.356255 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q89c" event={"ID":"2ed379ea-8580-41a6-8077-6d2347d13937","Type":"ContainerDied","Data":"17d73a52158b5d49dfe489e5a07e82aa5291ae512e9c651da2875d9c492ac3ee"} Dec 02 14:31:10 crc kubenswrapper[4625]: I1202 14:31:10.401543 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q89c" event={"ID":"2ed379ea-8580-41a6-8077-6d2347d13937","Type":"ContainerStarted","Data":"b0f7149956eccd89d25b2a18c574aeeb3df4b58547e27a1dbb8a8b67dc1994da"} Dec 02 14:31:11 crc kubenswrapper[4625]: I1202 14:31:11.450650 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7q89c" podStartSLOduration=5.324854568 podStartE2EDuration="9.450617753s" podCreationTimestamp="2025-12-02 14:31:02 +0000 UTC" firstStartedPulling="2025-12-02 14:31:04.330385463 +0000 UTC m=+2820.292562538" lastFinishedPulling="2025-12-02 14:31:08.456148648 +0000 UTC m=+2824.418325723" observedRunningTime="2025-12-02 14:31:11.43846323 +0000 UTC m=+2827.400640305" watchObservedRunningTime="2025-12-02 14:31:11.450617753 +0000 UTC m=+2827.412794828" Dec 02 14:31:12 crc kubenswrapper[4625]: I1202 14:31:12.739731 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:12 crc kubenswrapper[4625]: I1202 14:31:12.739817 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:12 crc kubenswrapper[4625]: I1202 14:31:12.815853 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:22 crc kubenswrapper[4625]: I1202 14:31:22.791501 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:22 crc kubenswrapper[4625]: I1202 14:31:22.870011 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-7q89c"] Dec 02 14:31:23 crc kubenswrapper[4625]: I1202 14:31:23.574223 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7q89c" podUID="2ed379ea-8580-41a6-8077-6d2347d13937" containerName="registry-server" containerID="cri-o://b0f7149956eccd89d25b2a18c574aeeb3df4b58547e27a1dbb8a8b67dc1994da" gracePeriod=2 Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.063494 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.165616 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlg4h\" (UniqueName: \"kubernetes.io/projected/2ed379ea-8580-41a6-8077-6d2347d13937-kube-api-access-qlg4h\") pod \"2ed379ea-8580-41a6-8077-6d2347d13937\" (UID: \"2ed379ea-8580-41a6-8077-6d2347d13937\") " Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.165710 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed379ea-8580-41a6-8077-6d2347d13937-utilities\") pod \"2ed379ea-8580-41a6-8077-6d2347d13937\" (UID: \"2ed379ea-8580-41a6-8077-6d2347d13937\") " Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.165812 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed379ea-8580-41a6-8077-6d2347d13937-catalog-content\") pod \"2ed379ea-8580-41a6-8077-6d2347d13937\" (UID: \"2ed379ea-8580-41a6-8077-6d2347d13937\") " Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.167403 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ed379ea-8580-41a6-8077-6d2347d13937-utilities" (OuterVolumeSpecName: "utilities") pod "2ed379ea-8580-41a6-8077-6d2347d13937" (UID: "2ed379ea-8580-41a6-8077-6d2347d13937"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.172973 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed379ea-8580-41a6-8077-6d2347d13937-kube-api-access-qlg4h" (OuterVolumeSpecName: "kube-api-access-qlg4h") pod "2ed379ea-8580-41a6-8077-6d2347d13937" (UID: "2ed379ea-8580-41a6-8077-6d2347d13937"). InnerVolumeSpecName "kube-api-access-qlg4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.188486 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ed379ea-8580-41a6-8077-6d2347d13937-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ed379ea-8580-41a6-8077-6d2347d13937" (UID: "2ed379ea-8580-41a6-8077-6d2347d13937"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.269174 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlg4h\" (UniqueName: \"kubernetes.io/projected/2ed379ea-8580-41a6-8077-6d2347d13937-kube-api-access-qlg4h\") on node \"crc\" DevicePath \"\"" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.269216 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed379ea-8580-41a6-8077-6d2347d13937-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.269257 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed379ea-8580-41a6-8077-6d2347d13937-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.584649 4625 generic.go:334] "Generic (PLEG): container finished" podID="2ed379ea-8580-41a6-8077-6d2347d13937" containerID="b0f7149956eccd89d25b2a18c574aeeb3df4b58547e27a1dbb8a8b67dc1994da" exitCode=0 Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.584704 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q89c" event={"ID":"2ed379ea-8580-41a6-8077-6d2347d13937","Type":"ContainerDied","Data":"b0f7149956eccd89d25b2a18c574aeeb3df4b58547e27a1dbb8a8b67dc1994da"} Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.584730 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7q89c" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.584752 4625 scope.go:117] "RemoveContainer" containerID="b0f7149956eccd89d25b2a18c574aeeb3df4b58547e27a1dbb8a8b67dc1994da" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.584738 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q89c" event={"ID":"2ed379ea-8580-41a6-8077-6d2347d13937","Type":"ContainerDied","Data":"2a76f0333903c090b8eba800645dcf276840e21a148c16e9b7113c9060cccb5f"} Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.634462 4625 scope.go:117] "RemoveContainer" containerID="17d73a52158b5d49dfe489e5a07e82aa5291ae512e9c651da2875d9c492ac3ee" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.634463 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7q89c"] Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.643983 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7q89c"] Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.655509 4625 scope.go:117] "RemoveContainer" containerID="a2adf1a966480c34caba797bc7e4142723c03b961794868512f274b8b4444eb5" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.715767 4625 scope.go:117] "RemoveContainer" containerID="b0f7149956eccd89d25b2a18c574aeeb3df4b58547e27a1dbb8a8b67dc1994da" Dec 02 14:31:24 crc kubenswrapper[4625]: E1202 14:31:24.716207 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0f7149956eccd89d25b2a18c574aeeb3df4b58547e27a1dbb8a8b67dc1994da\": container with ID starting with b0f7149956eccd89d25b2a18c574aeeb3df4b58547e27a1dbb8a8b67dc1994da not found: ID does not exist" containerID="b0f7149956eccd89d25b2a18c574aeeb3df4b58547e27a1dbb8a8b67dc1994da" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.716241 4625 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f7149956eccd89d25b2a18c574aeeb3df4b58547e27a1dbb8a8b67dc1994da"} err="failed to get container status \"b0f7149956eccd89d25b2a18c574aeeb3df4b58547e27a1dbb8a8b67dc1994da\": rpc error: code = NotFound desc = could not find container \"b0f7149956eccd89d25b2a18c574aeeb3df4b58547e27a1dbb8a8b67dc1994da\": container with ID starting with b0f7149956eccd89d25b2a18c574aeeb3df4b58547e27a1dbb8a8b67dc1994da not found: ID does not exist" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.716269 4625 scope.go:117] "RemoveContainer" containerID="17d73a52158b5d49dfe489e5a07e82aa5291ae512e9c651da2875d9c492ac3ee" Dec 02 14:31:24 crc kubenswrapper[4625]: E1202 14:31:24.717409 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17d73a52158b5d49dfe489e5a07e82aa5291ae512e9c651da2875d9c492ac3ee\": container with ID starting with 17d73a52158b5d49dfe489e5a07e82aa5291ae512e9c651da2875d9c492ac3ee not found: ID does not exist" containerID="17d73a52158b5d49dfe489e5a07e82aa5291ae512e9c651da2875d9c492ac3ee" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.717469 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17d73a52158b5d49dfe489e5a07e82aa5291ae512e9c651da2875d9c492ac3ee"} err="failed to get container status \"17d73a52158b5d49dfe489e5a07e82aa5291ae512e9c651da2875d9c492ac3ee\": rpc error: code = NotFound desc = could not find container \"17d73a52158b5d49dfe489e5a07e82aa5291ae512e9c651da2875d9c492ac3ee\": container with ID starting with 17d73a52158b5d49dfe489e5a07e82aa5291ae512e9c651da2875d9c492ac3ee not found: ID does not exist" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.719826 4625 scope.go:117] "RemoveContainer" containerID="a2adf1a966480c34caba797bc7e4142723c03b961794868512f274b8b4444eb5" Dec 02 14:31:24 crc kubenswrapper[4625]: E1202 14:31:24.720293 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2adf1a966480c34caba797bc7e4142723c03b961794868512f274b8b4444eb5\": container with ID starting with a2adf1a966480c34caba797bc7e4142723c03b961794868512f274b8b4444eb5 not found: ID does not exist" containerID="a2adf1a966480c34caba797bc7e4142723c03b961794868512f274b8b4444eb5" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.720355 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2adf1a966480c34caba797bc7e4142723c03b961794868512f274b8b4444eb5"} err="failed to get container status \"a2adf1a966480c34caba797bc7e4142723c03b961794868512f274b8b4444eb5\": rpc error: code = NotFound desc = could not find container \"a2adf1a966480c34caba797bc7e4142723c03b961794868512f274b8b4444eb5\": container with ID starting with a2adf1a966480c34caba797bc7e4142723c03b961794868512f274b8b4444eb5 not found: ID does not exist" Dec 02 14:31:24 crc kubenswrapper[4625]: I1202 14:31:24.870606 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed379ea-8580-41a6-8077-6d2347d13937" path="/var/lib/kubelet/pods/2ed379ea-8580-41a6-8077-6d2347d13937/volumes" Dec 02 14:31:49 crc kubenswrapper[4625]: I1202 14:31:49.271611 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:31:49 crc kubenswrapper[4625]: I1202 14:31:49.274043 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:32:19 crc kubenswrapper[4625]: I1202 14:32:19.272024 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:32:19 crc kubenswrapper[4625]: I1202 14:32:19.272988 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.552328 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wpzzz"] Dec 02 14:32:20 crc kubenswrapper[4625]: E1202 14:32:20.553529 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed379ea-8580-41a6-8077-6d2347d13937" containerName="extract-content" Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.553554 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed379ea-8580-41a6-8077-6d2347d13937" containerName="extract-content" Dec 02 14:32:20 crc kubenswrapper[4625]: E1202 14:32:20.553600 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed379ea-8580-41a6-8077-6d2347d13937" containerName="extract-utilities" Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.553612 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed379ea-8580-41a6-8077-6d2347d13937" containerName="extract-utilities" Dec 02 14:32:20 crc kubenswrapper[4625]: E1202 14:32:20.553622 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed379ea-8580-41a6-8077-6d2347d13937" containerName="registry-server" Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.553630 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed379ea-8580-41a6-8077-6d2347d13937" containerName="registry-server" Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.553932 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed379ea-8580-41a6-8077-6d2347d13937" containerName="registry-server" Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.556067 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wpzzz" Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.596183 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wpzzz"] Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.630516 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2896c3fb-6a7b-41c8-816f-b4ee6ee231fc-catalog-content\") pod \"community-operators-wpzzz\" (UID: \"2896c3fb-6a7b-41c8-816f-b4ee6ee231fc\") " pod="openshift-marketplace/community-operators-wpzzz" Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.631045 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cllhr\" (UniqueName: \"kubernetes.io/projected/2896c3fb-6a7b-41c8-816f-b4ee6ee231fc-kube-api-access-cllhr\") pod \"community-operators-wpzzz\" (UID: \"2896c3fb-6a7b-41c8-816f-b4ee6ee231fc\") " pod="openshift-marketplace/community-operators-wpzzz" Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.631472 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2896c3fb-6a7b-41c8-816f-b4ee6ee231fc-utilities\") pod \"community-operators-wpzzz\" (UID: \"2896c3fb-6a7b-41c8-816f-b4ee6ee231fc\") " pod="openshift-marketplace/community-operators-wpzzz" Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.732576 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cllhr\" (UniqueName: \"kubernetes.io/projected/2896c3fb-6a7b-41c8-816f-b4ee6ee231fc-kube-api-access-cllhr\") pod \"community-operators-wpzzz\" (UID: \"2896c3fb-6a7b-41c8-816f-b4ee6ee231fc\") " pod="openshift-marketplace/community-operators-wpzzz" Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.732645 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2896c3fb-6a7b-41c8-816f-b4ee6ee231fc-utilities\") pod \"community-operators-wpzzz\" (UID: \"2896c3fb-6a7b-41c8-816f-b4ee6ee231fc\") " pod="openshift-marketplace/community-operators-wpzzz" Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.732751 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2896c3fb-6a7b-41c8-816f-b4ee6ee231fc-catalog-content\") pod \"community-operators-wpzzz\" (UID: \"2896c3fb-6a7b-41c8-816f-b4ee6ee231fc\") " pod="openshift-marketplace/community-operators-wpzzz" Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.733348 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2896c3fb-6a7b-41c8-816f-b4ee6ee231fc-catalog-content\") pod \"community-operators-wpzzz\" (UID: \"2896c3fb-6a7b-41c8-816f-b4ee6ee231fc\") " pod="openshift-marketplace/community-operators-wpzzz" Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.734545 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2896c3fb-6a7b-41c8-816f-b4ee6ee231fc-utilities\") pod \"community-operators-wpzzz\" (UID: \"2896c3fb-6a7b-41c8-816f-b4ee6ee231fc\") " pod="openshift-marketplace/community-operators-wpzzz" Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.764617 4625 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cllhr\" (UniqueName: \"kubernetes.io/projected/2896c3fb-6a7b-41c8-816f-b4ee6ee231fc-kube-api-access-cllhr\") pod \"community-operators-wpzzz\" (UID: \"2896c3fb-6a7b-41c8-816f-b4ee6ee231fc\") " pod="openshift-marketplace/community-operators-wpzzz" Dec 02 14:32:20 crc kubenswrapper[4625]: I1202 14:32:20.882408 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wpzzz" Dec 02 14:32:21 crc kubenswrapper[4625]: I1202 14:32:21.597389 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wpzzz"] Dec 02 14:32:22 crc kubenswrapper[4625]: I1202 14:32:22.213956 4625 generic.go:334] "Generic (PLEG): container finished" podID="2896c3fb-6a7b-41c8-816f-b4ee6ee231fc" containerID="8a23d7c97edae9257c8d010925303e39b85caf9b96f7716143561b7f95351800" exitCode=0 Dec 02 14:32:22 crc kubenswrapper[4625]: I1202 14:32:22.214102 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpzzz" event={"ID":"2896c3fb-6a7b-41c8-816f-b4ee6ee231fc","Type":"ContainerDied","Data":"8a23d7c97edae9257c8d010925303e39b85caf9b96f7716143561b7f95351800"} Dec 02 14:32:22 crc kubenswrapper[4625]: I1202 14:32:22.214962 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpzzz" event={"ID":"2896c3fb-6a7b-41c8-816f-b4ee6ee231fc","Type":"ContainerStarted","Data":"dda2ea9c997446abb479f26240e9e0c4a7873e8a2469ecd64ae685bcfeff6acd"} Dec 02 14:32:28 crc kubenswrapper[4625]: I1202 14:32:28.279668 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpzzz" event={"ID":"2896c3fb-6a7b-41c8-816f-b4ee6ee231fc","Type":"ContainerStarted","Data":"6efcd3bdc0a1ef47fa1f8d3cfd8df365cf8c8c9fdca218ea221d0f6e1c80756b"} Dec 02 14:32:29 crc kubenswrapper[4625]: I1202 14:32:29.294886 4625 generic.go:334] "Generic (PLEG): container finished" podID="2896c3fb-6a7b-41c8-816f-b4ee6ee231fc" containerID="6efcd3bdc0a1ef47fa1f8d3cfd8df365cf8c8c9fdca218ea221d0f6e1c80756b" exitCode=0 Dec 02 14:32:29 crc kubenswrapper[4625]: I1202 14:32:29.294979 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpzzz" event={"ID":"2896c3fb-6a7b-41c8-816f-b4ee6ee231fc","Type":"ContainerDied","Data":"6efcd3bdc0a1ef47fa1f8d3cfd8df365cf8c8c9fdca218ea221d0f6e1c80756b"} Dec 02 14:32:31 crc kubenswrapper[4625]: I1202 14:32:31.318585 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpzzz" event={"ID":"2896c3fb-6a7b-41c8-816f-b4ee6ee231fc","Type":"ContainerStarted","Data":"6ce222187fefd76d9c740d9d484912be1677da25072d91ab8fccb287ffbd5ea8"} Dec 02 14:32:31 crc kubenswrapper[4625]: I1202 14:32:31.347836 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wpzzz" podStartSLOduration=3.511610668 podStartE2EDuration="11.347812482s" podCreationTimestamp="2025-12-02 14:32:20 +0000 UTC" firstStartedPulling="2025-12-02 14:32:22.216998723 +0000 UTC m=+2898.179175798" lastFinishedPulling="2025-12-02 14:32:30.053200527 +0000 UTC m=+2906.015377612" observedRunningTime="2025-12-02 14:32:31.337104727 +0000 UTC m=+2907.299281802" watchObservedRunningTime="2025-12-02 14:32:31.347812482 +0000 UTC m=+2907.309989557" Dec 02 14:32:36 crc kubenswrapper[4625]: I1202 14:32:36.429998 4625 generic.go:334] "Generic (PLEG): container finished" 
podID="c531f95a-508b-48ea-bfb7-91659bd6df10" containerID="bd99e8cfd7b27141877218d6496e238b69287c7d5af148748a586e5ac1ea70c6" exitCode=0 Dec 02 14:32:36 crc kubenswrapper[4625]: I1202 14:32:36.430171 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" event={"ID":"c531f95a-508b-48ea-bfb7-91659bd6df10","Type":"ContainerDied","Data":"bd99e8cfd7b27141877218d6496e238b69287c7d5af148748a586e5ac1ea70c6"} Dec 02 14:32:37 crc kubenswrapper[4625]: I1202 14:32:37.996682 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.052304 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-inventory\") pod \"c531f95a-508b-48ea-bfb7-91659bd6df10\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.053088 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-cell1-compute-config-1\") pod \"c531f95a-508b-48ea-bfb7-91659bd6df10\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.053189 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-combined-ca-bundle\") pod \"c531f95a-508b-48ea-bfb7-91659bd6df10\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.053319 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-ssh-key\") pod \"c531f95a-508b-48ea-bfb7-91659bd6df10\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.053408 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-migration-ssh-key-1\") pod \"c531f95a-508b-48ea-bfb7-91659bd6df10\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.053612 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-extra-config-0\") pod \"c531f95a-508b-48ea-bfb7-91659bd6df10\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.053706 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-cell1-compute-config-0\") pod \"c531f95a-508b-48ea-bfb7-91659bd6df10\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.053842 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-migration-ssh-key-0\") pod \"c531f95a-508b-48ea-bfb7-91659bd6df10\" (UID: 
\"c531f95a-508b-48ea-bfb7-91659bd6df10\") " Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.053981 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hx49\" (UniqueName: \"kubernetes.io/projected/c531f95a-508b-48ea-bfb7-91659bd6df10-kube-api-access-9hx49\") pod \"c531f95a-508b-48ea-bfb7-91659bd6df10\" (UID: \"c531f95a-508b-48ea-bfb7-91659bd6df10\") " Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.064620 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c531f95a-508b-48ea-bfb7-91659bd6df10-kube-api-access-9hx49" (OuterVolumeSpecName: "kube-api-access-9hx49") pod "c531f95a-508b-48ea-bfb7-91659bd6df10" (UID: "c531f95a-508b-48ea-bfb7-91659bd6df10"). InnerVolumeSpecName "kube-api-access-9hx49". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.065133 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c531f95a-508b-48ea-bfb7-91659bd6df10" (UID: "c531f95a-508b-48ea-bfb7-91659bd6df10"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.089540 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "c531f95a-508b-48ea-bfb7-91659bd6df10" (UID: "c531f95a-508b-48ea-bfb7-91659bd6df10"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.096314 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c531f95a-508b-48ea-bfb7-91659bd6df10" (UID: "c531f95a-508b-48ea-bfb7-91659bd6df10"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.096697 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c531f95a-508b-48ea-bfb7-91659bd6df10" (UID: "c531f95a-508b-48ea-bfb7-91659bd6df10"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.111926 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-inventory" (OuterVolumeSpecName: "inventory") pod "c531f95a-508b-48ea-bfb7-91659bd6df10" (UID: "c531f95a-508b-48ea-bfb7-91659bd6df10"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.121429 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c531f95a-508b-48ea-bfb7-91659bd6df10" (UID: "c531f95a-508b-48ea-bfb7-91659bd6df10"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.124871 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c531f95a-508b-48ea-bfb7-91659bd6df10" (UID: "c531f95a-508b-48ea-bfb7-91659bd6df10"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.126925 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c531f95a-508b-48ea-bfb7-91659bd6df10" (UID: "c531f95a-508b-48ea-bfb7-91659bd6df10"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.156652 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hx49\" (UniqueName: \"kubernetes.io/projected/c531f95a-508b-48ea-bfb7-91659bd6df10-kube-api-access-9hx49\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.156703 4625 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.156716 4625 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.156728 4625 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.156737 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.156749 4625 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.156759 4625 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.156768 4625 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.156776 4625 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c531f95a-508b-48ea-bfb7-91659bd6df10-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 
14:32:38.458285 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" event={"ID":"c531f95a-508b-48ea-bfb7-91659bd6df10","Type":"ContainerDied","Data":"a898838c4aa85d3c75071d6199109106cf41cd1d7107a435848dfad033c44128"} Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.458361 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a898838c4aa85d3c75071d6199109106cf41cd1d7107a435848dfad033c44128" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.458482 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qxsfq" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.594712 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5"] Dec 02 14:32:38 crc kubenswrapper[4625]: E1202 14:32:38.595231 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c531f95a-508b-48ea-bfb7-91659bd6df10" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.595246 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="c531f95a-508b-48ea-bfb7-91659bd6df10" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.595468 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="c531f95a-508b-48ea-bfb7-91659bd6df10" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.598232 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.605104 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5"] Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.605452 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5hpl8" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.605484 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.605666 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.605779 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.605869 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.668802 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.670571 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvxxb\" (UniqueName: 
\"kubernetes.io/projected/01572dfd-9cb1-4c55-90fc-759a859f60e4-kube-api-access-dvxxb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.670859 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.670934 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.671450 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.671480 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.671557 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.774214 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.774367 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.774431 4625 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.774757 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.774815 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvxxb\" (UniqueName: \"kubernetes.io/projected/01572dfd-9cb1-4c55-90fc-759a859f60e4-kube-api-access-dvxxb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.775635 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.775717 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.782375 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.782893 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.783277 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" 
Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.786341 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.790946 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.794021 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.794640 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvxxb\" (UniqueName: \"kubernetes.io/projected/01572dfd-9cb1-4c55-90fc-759a859f60e4-kube-api-access-dvxxb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:32:38 crc kubenswrapper[4625]: I1202 14:32:38.939907 4625 util.go:30] "No sandbox for pod can be found. 
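Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5"

[Editor's note] The block above shows the kubelet's volume reconciliation for the telemetry pod in three stages per volume: VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded". Below is a minimal sketch of one way to cross-check an excerpt like this for mounts that started but never reported success. It assumes journalctl output in exactly the quoted format above (the escaped \" around volume names is literal in the log); `journal_text` and `unmounted_volumes` are hypothetical names for this sketch, not kubelet API:

```python
import re

# The kubelet quotes its structured message, so volume names appear as
# \"name\" with literal backslashes in the journal text.
STARTED = re.compile(r'operationExecutor\.MountVolume started for volume \\"(.+?)\\"')
SUCCEEDED = re.compile(r'MountVolume\.SetUp succeeded for volume \\"(.+?)\\"')

def unmounted_volumes(journal_text: str) -> set[str]:
    """Volumes with a 'MountVolume started' entry but no 'SetUp succeeded' entry."""
    return set(STARTED.findall(journal_text)) - set(SUCCEEDED.findall(journal_text))
```

Run over this whole excerpt it should return an empty set: all seven of the telemetry pod's volumes (inventory, ssh-key, the three ceilometer-compute-config-data secrets, the CA bundle, and the projected service-account token) report SetUp succeeded.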
Dec 02 14:32:39 crc kubenswrapper[4625]: I1202 14:32:39.506913 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5"] Dec 02 14:32:40 crc kubenswrapper[4625]: I1202 14:32:40.486211 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" event={"ID":"01572dfd-9cb1-4c55-90fc-759a859f60e4","Type":"ContainerStarted","Data":"331eef49070c517bc89882c7b0c09dd725cbfdc54bbbe5086340996ded239c69"} Dec 02 14:32:40 crc kubenswrapper[4625]: I1202 14:32:40.884431 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wpzzz" Dec 02 14:32:40 crc kubenswrapper[4625]: I1202 14:32:40.884496 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wpzzz" Dec 02 14:32:40 crc kubenswrapper[4625]: I1202 14:32:40.940778 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wpzzz" Dec 02 14:32:41 crc kubenswrapper[4625]: I1202 14:32:41.499249 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" event={"ID":"01572dfd-9cb1-4c55-90fc-759a859f60e4","Type":"ContainerStarted","Data":"ed424b5e6c332b12c6042ac073262e16bed1724a1ec0ee6216bd49c98ac53fe9"} Dec 02 14:32:41 crc kubenswrapper[4625]: I1202 14:32:41.525285 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" podStartSLOduration=2.814714768 podStartE2EDuration="3.525253729s" podCreationTimestamp="2025-12-02 14:32:38 +0000 UTC" firstStartedPulling="2025-12-02 14:32:39.508958106 +0000 UTC m=+2915.471135181" lastFinishedPulling="2025-12-02 14:32:40.219497067 +0000 UTC m=+2916.181674142" observedRunningTime="2025-12-02 14:32:41.515778276 +0000 UTC m=+2917.477955361" watchObservedRunningTime="2025-12-02 14:32:41.525253729 +0000 UTC m=+2917.487430804" Dec 02 14:32:41 crc kubenswrapper[4625]: I1202 14:32:41.556074 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wpzzz" Dec 02 14:32:41 crc kubenswrapper[4625]: I1202 14:32:41.666707 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wpzzz"] Dec 02 14:32:41 crc kubenswrapper[4625]: I1202 14:32:41.726359 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nrhd2"] Dec 02 14:32:41 crc kubenswrapper[4625]: I1202 14:32:41.726865 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nrhd2" podUID="460994e6-261b-4787-bed8-8b4ad1d83e3d" containerName="registry-server" containerID="cri-o://9ab2a2d03e1820520c9e3a0e3e1b083d5903db32ab013675c48b6051097c8786" gracePeriod=2 Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.314016 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nrhd2" Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.368243 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/460994e6-261b-4787-bed8-8b4ad1d83e3d-utilities\") pod \"460994e6-261b-4787-bed8-8b4ad1d83e3d\" (UID: \"460994e6-261b-4787-bed8-8b4ad1d83e3d\") " Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.368488 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzr5w\" (UniqueName: \"kubernetes.io/projected/460994e6-261b-4787-bed8-8b4ad1d83e3d-kube-api-access-rzr5w\") pod \"460994e6-261b-4787-bed8-8b4ad1d83e3d\" (UID: \"460994e6-261b-4787-bed8-8b4ad1d83e3d\") " Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.368594 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/460994e6-261b-4787-bed8-8b4ad1d83e3d-catalog-content\") pod \"460994e6-261b-4787-bed8-8b4ad1d83e3d\" (UID: \"460994e6-261b-4787-bed8-8b4ad1d83e3d\") " Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.382199 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/460994e6-261b-4787-bed8-8b4ad1d83e3d-utilities" (OuterVolumeSpecName: "utilities") pod "460994e6-261b-4787-bed8-8b4ad1d83e3d" (UID: "460994e6-261b-4787-bed8-8b4ad1d83e3d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.395555 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/460994e6-261b-4787-bed8-8b4ad1d83e3d-kube-api-access-rzr5w" (OuterVolumeSpecName: "kube-api-access-rzr5w") pod "460994e6-261b-4787-bed8-8b4ad1d83e3d" (UID: "460994e6-261b-4787-bed8-8b4ad1d83e3d"). InnerVolumeSpecName "kube-api-access-rzr5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.465112 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/460994e6-261b-4787-bed8-8b4ad1d83e3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "460994e6-261b-4787-bed8-8b4ad1d83e3d" (UID: "460994e6-261b-4787-bed8-8b4ad1d83e3d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.471751 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzr5w\" (UniqueName: \"kubernetes.io/projected/460994e6-261b-4787-bed8-8b4ad1d83e3d-kube-api-access-rzr5w\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.471787 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/460994e6-261b-4787-bed8-8b4ad1d83e3d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.471796 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/460994e6-261b-4787-bed8-8b4ad1d83e3d-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.513597 4625 generic.go:334] "Generic (PLEG): container finished" podID="460994e6-261b-4787-bed8-8b4ad1d83e3d" containerID="9ab2a2d03e1820520c9e3a0e3e1b083d5903db32ab013675c48b6051097c8786" exitCode=0 Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.515187 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrhd2" event={"ID":"460994e6-261b-4787-bed8-8b4ad1d83e3d","Type":"ContainerDied","Data":"9ab2a2d03e1820520c9e3a0e3e1b083d5903db32ab013675c48b6051097c8786"} Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.515232 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrhd2" event={"ID":"460994e6-261b-4787-bed8-8b4ad1d83e3d","Type":"ContainerDied","Data":"15677c38eca0b53395aebab074bdb9e8665347c07dd0bdc14389bee1a2b2c2ef"} Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.515255 4625 scope.go:117] "RemoveContainer" containerID="9ab2a2d03e1820520c9e3a0e3e1b083d5903db32ab013675c48b6051097c8786" Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.516541 4625 util.go:48] "No ready sandbox for pod can be found. 
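Need to start a new one" pod="openshift-marketplace/community-operators-nrhd2"

[Editor's note] The RemoveContainer / "ContainerStatus from runtime service failed" pairs that follow are the tail end of deleting community-operators-nrhd2: by the time the kubelet re-queries the containers it just removed, CRI-O answers NotFound, which cleanup code treats as already-done rather than as a failure. A minimal sketch of that idempotent interpretation, keyed off the gRPC error strings seen below; `classify_remove_result` is a hypothetical helper for illustration, not kubelet API:

```python
def classify_remove_result(err: str | None) -> str:
    """Interpret a CRI RemoveContainer/ContainerStatus error string."""
    if err is None:
        return "removed"
    if "code = NotFound" in err:
        return "already-gone"  # benign: the container was deleted earlier
    return "retry"

assert classify_remove_result(None) == "removed"
assert classify_remove_result("rpc error: code = NotFound desc = could not "
                              "find container") == "already-gone"
```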
Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.572525 4625 scope.go:117] "RemoveContainer" containerID="a61e1786d4063cc2a9a3344d9da560417d511370732b111436bd7693422e442d" Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.593974 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nrhd2"] Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.612425 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nrhd2"] Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.685125 4625 scope.go:117] "RemoveContainer" containerID="14748d19a3f5a71f7923afeb15a1d8c14e0d8014e6d8a66b367a8c7aa0254b75" Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.748352 4625 scope.go:117] "RemoveContainer" containerID="9ab2a2d03e1820520c9e3a0e3e1b083d5903db32ab013675c48b6051097c8786" Dec 02 14:32:42 crc kubenswrapper[4625]: E1202 14:32:42.749753 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ab2a2d03e1820520c9e3a0e3e1b083d5903db32ab013675c48b6051097c8786\": container with ID starting with 9ab2a2d03e1820520c9e3a0e3e1b083d5903db32ab013675c48b6051097c8786 not found: ID does not exist" containerID="9ab2a2d03e1820520c9e3a0e3e1b083d5903db32ab013675c48b6051097c8786" Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.749797 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ab2a2d03e1820520c9e3a0e3e1b083d5903db32ab013675c48b6051097c8786"} err="failed to get container status \"9ab2a2d03e1820520c9e3a0e3e1b083d5903db32ab013675c48b6051097c8786\": rpc error: code = NotFound desc = could not find container \"9ab2a2d03e1820520c9e3a0e3e1b083d5903db32ab013675c48b6051097c8786\": container with ID starting with 9ab2a2d03e1820520c9e3a0e3e1b083d5903db32ab013675c48b6051097c8786 not found: ID does not exist" Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.749827 4625 scope.go:117] "RemoveContainer" containerID="a61e1786d4063cc2a9a3344d9da560417d511370732b111436bd7693422e442d" Dec 02 14:32:42 crc kubenswrapper[4625]: E1202 14:32:42.752572 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a61e1786d4063cc2a9a3344d9da560417d511370732b111436bd7693422e442d\": container with ID starting with a61e1786d4063cc2a9a3344d9da560417d511370732b111436bd7693422e442d not found: ID does not exist" containerID="a61e1786d4063cc2a9a3344d9da560417d511370732b111436bd7693422e442d" Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.752614 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a61e1786d4063cc2a9a3344d9da560417d511370732b111436bd7693422e442d"} err="failed to get container status \"a61e1786d4063cc2a9a3344d9da560417d511370732b111436bd7693422e442d\": rpc error: code = NotFound desc = could not find container \"a61e1786d4063cc2a9a3344d9da560417d511370732b111436bd7693422e442d\": container with ID starting with a61e1786d4063cc2a9a3344d9da560417d511370732b111436bd7693422e442d not found: ID does not exist" Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.752631 4625 scope.go:117] "RemoveContainer" containerID="14748d19a3f5a71f7923afeb15a1d8c14e0d8014e6d8a66b367a8c7aa0254b75" Dec 02 14:32:42 crc kubenswrapper[4625]: E1202 14:32:42.753226 4625 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"14748d19a3f5a71f7923afeb15a1d8c14e0d8014e6d8a66b367a8c7aa0254b75\": container with ID starting with 14748d19a3f5a71f7923afeb15a1d8c14e0d8014e6d8a66b367a8c7aa0254b75 not found: ID does not exist" containerID="14748d19a3f5a71f7923afeb15a1d8c14e0d8014e6d8a66b367a8c7aa0254b75" Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.753249 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14748d19a3f5a71f7923afeb15a1d8c14e0d8014e6d8a66b367a8c7aa0254b75"} err="failed to get container status \"14748d19a3f5a71f7923afeb15a1d8c14e0d8014e6d8a66b367a8c7aa0254b75\": rpc error: code = NotFound desc = could not find container \"14748d19a3f5a71f7923afeb15a1d8c14e0d8014e6d8a66b367a8c7aa0254b75\": container with ID starting with 14748d19a3f5a71f7923afeb15a1d8c14e0d8014e6d8a66b367a8c7aa0254b75 not found: ID does not exist" Dec 02 14:32:42 crc kubenswrapper[4625]: I1202 14:32:42.879880 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="460994e6-261b-4787-bed8-8b4ad1d83e3d" path="/var/lib/kubelet/pods/460994e6-261b-4787-bed8-8b4ad1d83e3d/volumes" Dec 02 14:32:49 crc kubenswrapper[4625]: I1202 14:32:49.271570 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:32:49 crc kubenswrapper[4625]: I1202 14:32:49.272511 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:32:49 crc kubenswrapper[4625]: I1202 14:32:49.272580 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 14:32:49 crc kubenswrapper[4625]: I1202 14:32:49.273563 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e1221f82188e7b3ced065d921a5d009af9803ddf85badbe077fcaa28988a9c41"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:32:49 crc kubenswrapper[4625]: I1202 14:32:49.273616 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://e1221f82188e7b3ced065d921a5d009af9803ddf85badbe077fcaa28988a9c41" gracePeriod=600 Dec 02 14:32:49 crc kubenswrapper[4625]: I1202 14:32:49.604561 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="e1221f82188e7b3ced065d921a5d009af9803ddf85badbe077fcaa28988a9c41" exitCode=0 Dec 02 14:32:49 crc kubenswrapper[4625]: I1202 14:32:49.604738 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" 
event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"e1221f82188e7b3ced065d921a5d009af9803ddf85badbe077fcaa28988a9c41"} Dec 02 14:32:49 crc kubenswrapper[4625]: I1202 14:32:49.605053 4625 scope.go:117] "RemoveContainer" containerID="13fe6a73c3ee8d1dc59a99ddabb6b67ccc3c6aa08abdab5a776693ce0877eaee" Dec 02 14:32:50 crc kubenswrapper[4625]: I1202 14:32:50.621545 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9"} Dec 02 14:34:49 crc kubenswrapper[4625]: I1202 14:34:49.271959 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:34:49 crc kubenswrapper[4625]: I1202 14:34:49.272853 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:35:19 crc kubenswrapper[4625]: I1202 14:35:19.271548 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:35:19 crc kubenswrapper[4625]: I1202 14:35:19.272155 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:35:49 crc kubenswrapper[4625]: I1202 14:35:49.271799 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:35:49 crc kubenswrapper[4625]: I1202 14:35:49.272748 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:35:49 crc kubenswrapper[4625]: I1202 14:35:49.272859 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 14:35:49 crc kubenswrapper[4625]: I1202 14:35:49.274049 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 
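14:35:49 crc kubenswrapper[4625]: I1202 14:35:49.274158 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" gracePeriod=600

[Editor's note] The liveness probe driving these restarts is a plain HTTP GET against the daemon's health endpoint; "connect: connection refused" counts as a failed probe, and enough consecutive failures trigger the kill above. A minimal sketch of that check, with the URL taken from the log; the 1 s timeout and 3-failure threshold are assumptions mirroring common kubelet probe defaults, not values visible here:

```python
import urllib.request

def probe_once(url: str = "http://127.0.0.1:8798/health", timeout: float = 1.0) -> bool:
    """One liveness check; the log's 'connect: connection refused' lands in except."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:  # URLError (incl. refused connections) subclasses OSError
        return False

def should_restart(results: list[bool], failure_threshold: int = 3) -> bool:
    """Kill and restart once the last failure_threshold probes all failed."""
    recent = results[-failure_threshold:]
    return len(recent) == failure_threshold and not any(recent)
```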
Dec 02 14:35:49 crc kubenswrapper[4625]: E1202 14:35:49.418079 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:35:50 crc kubenswrapper[4625]: I1202 14:35:50.151687 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" exitCode=0 Dec 02 14:35:50 crc kubenswrapper[4625]: I1202 14:35:50.151761 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9"} Dec 02 14:35:50 crc kubenswrapper[4625]: I1202 14:35:50.152244 4625 scope.go:117] "RemoveContainer" containerID="e1221f82188e7b3ced065d921a5d009af9803ddf85badbe077fcaa28988a9c41" Dec 02 14:35:50 crc kubenswrapper[4625]: I1202 14:35:50.153279 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:35:50 crc kubenswrapper[4625]: E1202 14:35:50.157449 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:36:01 crc kubenswrapper[4625]: I1202 14:36:01.856646 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:36:01 crc kubenswrapper[4625]: E1202 14:36:01.857742 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:36:04 crc kubenswrapper[4625]: E1202 14:36:04.532867 4625 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01572dfd_9cb1_4c55_90fc_759a859f60e4.slice/crio-ed424b5e6c332b12c6042ac073262e16bed1724a1ec0ee6216bd49c98ac53fe9.scope\": RecentStats: unable to find data in memory cache]" Dec 02 14:36:05 crc kubenswrapper[4625]: I1202 14:36:05.319029 4625 generic.go:334] "Generic (PLEG): container finished" 
podID="01572dfd-9cb1-4c55-90fc-759a859f60e4" containerID="ed424b5e6c332b12c6042ac073262e16bed1724a1ec0ee6216bd49c98ac53fe9" exitCode=0 Dec 02 14:36:05 crc kubenswrapper[4625]: I1202 14:36:05.319087 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" event={"ID":"01572dfd-9cb1-4c55-90fc-759a859f60e4","Type":"ContainerDied","Data":"ed424b5e6c332b12c6042ac073262e16bed1724a1ec0ee6216bd49c98ac53fe9"} Dec 02 14:36:06 crc kubenswrapper[4625]: I1202 14:36:06.880113 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.045781 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-0\") pod \"01572dfd-9cb1-4c55-90fc-759a859f60e4\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.045841 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-inventory\") pod \"01572dfd-9cb1-4c55-90fc-759a859f60e4\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.045878 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-2\") pod \"01572dfd-9cb1-4c55-90fc-759a859f60e4\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.046044 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ssh-key\") pod \"01572dfd-9cb1-4c55-90fc-759a859f60e4\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.046067 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-telemetry-combined-ca-bundle\") pod \"01572dfd-9cb1-4c55-90fc-759a859f60e4\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.046171 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-1\") pod \"01572dfd-9cb1-4c55-90fc-759a859f60e4\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.046352 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvxxb\" (UniqueName: \"kubernetes.io/projected/01572dfd-9cb1-4c55-90fc-759a859f60e4-kube-api-access-dvxxb\") pod \"01572dfd-9cb1-4c55-90fc-759a859f60e4\" (UID: \"01572dfd-9cb1-4c55-90fc-759a859f60e4\") " Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.056845 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") 
pod "01572dfd-9cb1-4c55-90fc-759a859f60e4" (UID: "01572dfd-9cb1-4c55-90fc-759a859f60e4"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.059762 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01572dfd-9cb1-4c55-90fc-759a859f60e4-kube-api-access-dvxxb" (OuterVolumeSpecName: "kube-api-access-dvxxb") pod "01572dfd-9cb1-4c55-90fc-759a859f60e4" (UID: "01572dfd-9cb1-4c55-90fc-759a859f60e4"). InnerVolumeSpecName "kube-api-access-dvxxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.087873 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-inventory" (OuterVolumeSpecName: "inventory") pod "01572dfd-9cb1-4c55-90fc-759a859f60e4" (UID: "01572dfd-9cb1-4c55-90fc-759a859f60e4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.089421 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "01572dfd-9cb1-4c55-90fc-759a859f60e4" (UID: "01572dfd-9cb1-4c55-90fc-759a859f60e4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.090518 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "01572dfd-9cb1-4c55-90fc-759a859f60e4" (UID: "01572dfd-9cb1-4c55-90fc-759a859f60e4"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.091818 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "01572dfd-9cb1-4c55-90fc-759a859f60e4" (UID: "01572dfd-9cb1-4c55-90fc-759a859f60e4"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.099376 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "01572dfd-9cb1-4c55-90fc-759a859f60e4" (UID: "01572dfd-9cb1-4c55-90fc-759a859f60e4"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.149658 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvxxb\" (UniqueName: \"kubernetes.io/projected/01572dfd-9cb1-4c55-90fc-759a859f60e4-kube-api-access-dvxxb\") on node \"crc\" DevicePath \"\"" Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.149699 4625 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.149714 4625 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-inventory\") on node \"crc\" DevicePath \"\"" Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.149728 4625 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.149739 4625 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.149748 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.149759 4625 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/01572dfd-9cb1-4c55-90fc-759a859f60e4-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.340300 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5" event={"ID":"01572dfd-9cb1-4c55-90fc-759a859f60e4","Type":"ContainerDied","Data":"331eef49070c517bc89882c7b0c09dd725cbfdc54bbbe5086340996ded239c69"} Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.340491 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="331eef49070c517bc89882c7b0c09dd725cbfdc54bbbe5086340996ded239c69" Dec 02 14:36:07 crc kubenswrapper[4625]: I1202 14:36:07.340811 4625 util.go:48] "No ready sandbox for pod can be found. 
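Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5"

[Editor's note] From here on the machine-config-daemon entries settle into a steady CrashLoopBackOff rhythm: each sync attempt is skipped with "back-off 5m0s", meaning the restart delay has already doubled its way up to the cap. A minimal sketch of that back-off ladder; only the 5m cap is visible in the log, and the 10 s starting point is an assumption based on kubelet's usual default:

```python
def backoff_schedule(restarts: int, initial: int = 10, cap: int = 300) -> list[int]:
    """Delay in seconds before each restart: double the previous delay, capped at 5m."""
    delays, delay = [], initial
    for _ in range(restarts):
        delays.append(min(delay, cap))
        delay *= 2
    return delays

print(backoff_schedule(7))  # [10, 20, 40, 80, 160, 300, 300]
```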
Dec 02 14:36:14 crc kubenswrapper[4625]: I1202 14:36:14.890084 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:36:14 crc kubenswrapper[4625]: E1202 14:36:14.890962 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:36:26 crc kubenswrapper[4625]: I1202 14:36:26.856623 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:36:26 crc kubenswrapper[4625]: E1202 14:36:26.857633 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:36:41 crc kubenswrapper[4625]: I1202 14:36:41.856790 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:36:41 crc kubenswrapper[4625]: E1202 14:36:41.857884 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:36:55 crc kubenswrapper[4625]: I1202 14:36:55.855752 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:36:55 crc kubenswrapper[4625]: E1202 14:36:55.856944 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:37:07 crc kubenswrapper[4625]: I1202 14:37:07.856587 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:37:07 crc kubenswrapper[4625]: E1202 14:37:07.857811 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.570877 4625 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/tempest-tests-tempest"] Dec 02 14:37:09 crc kubenswrapper[4625]: E1202 14:37:09.572060 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="460994e6-261b-4787-bed8-8b4ad1d83e3d" containerName="extract-content" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.572094 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="460994e6-261b-4787-bed8-8b4ad1d83e3d" containerName="extract-content" Dec 02 14:37:09 crc kubenswrapper[4625]: E1202 14:37:09.572117 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="460994e6-261b-4787-bed8-8b4ad1d83e3d" containerName="extract-utilities" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.572126 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="460994e6-261b-4787-bed8-8b4ad1d83e3d" containerName="extract-utilities" Dec 02 14:37:09 crc kubenswrapper[4625]: E1202 14:37:09.572149 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01572dfd-9cb1-4c55-90fc-759a859f60e4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.572159 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="01572dfd-9cb1-4c55-90fc-759a859f60e4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 02 14:37:09 crc kubenswrapper[4625]: E1202 14:37:09.572206 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="460994e6-261b-4787-bed8-8b4ad1d83e3d" containerName="registry-server" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.572214 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="460994e6-261b-4787-bed8-8b4ad1d83e3d" containerName="registry-server" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.572519 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="01572dfd-9cb1-4c55-90fc-759a859f60e4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.572576 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="460994e6-261b-4787-bed8-8b4ad1d83e3d" containerName="registry-server" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.573554 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.577117 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.577143 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-r2fq6" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.577262 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.577402 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.588797 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.747466 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f72b183c-9a68-408e-b6b0-2accb1e96305-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.747544 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.747571 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f72b183c-9a68-408e-b6b0-2accb1e96305-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.747625 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.747672 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jzpw\" (UniqueName: \"kubernetes.io/projected/f72b183c-9a68-408e-b6b0-2accb1e96305-kube-api-access-8jzpw\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.747745 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.747879 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/f72b183c-9a68-408e-b6b0-2accb1e96305-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.747918 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.747978 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f72b183c-9a68-408e-b6b0-2accb1e96305-config-data\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.850443 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jzpw\" (UniqueName: \"kubernetes.io/projected/f72b183c-9a68-408e-b6b0-2accb1e96305-kube-api-access-8jzpw\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.850541 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.850626 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f72b183c-9a68-408e-b6b0-2accb1e96305-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.850680 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.850703 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f72b183c-9a68-408e-b6b0-2accb1e96305-config-data\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.850770 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f72b183c-9a68-408e-b6b0-2accb1e96305-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.850816 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: 
\"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.850838 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f72b183c-9a68-408e-b6b0-2accb1e96305-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.851634 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f72b183c-9a68-408e-b6b0-2accb1e96305-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.852486 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f72b183c-9a68-408e-b6b0-2accb1e96305-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.852821 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.853208 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.853407 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f72b183c-9a68-408e-b6b0-2accb1e96305-config-data\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.854324 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f72b183c-9a68-408e-b6b0-2accb1e96305-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.859448 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.867150 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 
14:37:09.868162 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.875653 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jzpw\" (UniqueName: \"kubernetes.io/projected/f72b183c-9a68-408e-b6b0-2accb1e96305-kube-api-access-8jzpw\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.892555 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " pod="openstack/tempest-tests-tempest" Dec 02 14:37:09 crc kubenswrapper[4625]: I1202 14:37:09.905273 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 02 14:37:10 crc kubenswrapper[4625]: I1202 14:37:10.385873 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 02 14:37:10 crc kubenswrapper[4625]: I1202 14:37:10.395366 4625 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:37:10 crc kubenswrapper[4625]: I1202 14:37:10.996039 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f72b183c-9a68-408e-b6b0-2accb1e96305","Type":"ContainerStarted","Data":"5d4ebee3f70668e25a5fd582dbe24777a736c4a0892acfc7f34fa1d03306cfa3"} Dec 02 14:37:18 crc kubenswrapper[4625]: I1202 14:37:18.858403 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:37:18 crc kubenswrapper[4625]: E1202 14:37:18.859638 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:37:32 crc kubenswrapper[4625]: I1202 14:37:32.857713 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:37:32 crc kubenswrapper[4625]: E1202 14:37:32.859113 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:37:46 crc kubenswrapper[4625]: I1202 14:37:46.856144 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:37:46 crc kubenswrapper[4625]: E1202 14:37:46.857292 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:37:49 crc kubenswrapper[4625]: E1202 14:37:49.938836 4625 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 02 14:37:49 crc kubenswrapper[4625]: E1202 14:37:49.943889 4625 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jzpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},
},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(f72b183c-9a68-408e-b6b0-2accb1e96305): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 02 14:37:49 crc kubenswrapper[4625]: E1202 14:37:49.945102 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="f72b183c-9a68-408e-b6b0-2accb1e96305" Dec 02 14:37:50 crc kubenswrapper[4625]: E1202 14:37:50.479193 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="f72b183c-9a68-408e-b6b0-2accb1e96305" Dec 02 14:38:00 crc kubenswrapper[4625]: I1202 14:38:00.858999 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:38:00 crc kubenswrapper[4625]: E1202 14:38:00.860261 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:38:06 crc kubenswrapper[4625]: I1202 14:38:06.415236 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 02 14:38:08 crc kubenswrapper[4625]: I1202 14:38:08.688025 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f72b183c-9a68-408e-b6b0-2accb1e96305","Type":"ContainerStarted","Data":"60529a8d3127c5f558cfac5d5bbf30b26305d57752fe5bfbc2714b6824a34fd8"} Dec 02 14:38:08 crc kubenswrapper[4625]: I1202 14:38:08.720183 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.703267208 podStartE2EDuration="1m0.72015112s" podCreationTimestamp="2025-12-02 14:37:08 +0000 UTC" firstStartedPulling="2025-12-02 14:37:10.394876919 +0000 UTC m=+3186.357053994" lastFinishedPulling="2025-12-02 14:38:06.411760831 +0000 UTC m=+3242.373937906" observedRunningTime="2025-12-02 14:38:08.715633839 +0000 UTC m=+3244.677810914" watchObservedRunningTime="2025-12-02 14:38:08.72015112 +0000 UTC m=+3244.682328195" Dec 02 14:38:14 crc kubenswrapper[4625]: I1202 14:38:14.865838 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:38:14 crc kubenswrapper[4625]: E1202 14:38:14.867085 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" 
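podUID="d911ea35-69e2-4943-999e-389a961ce243"

[Editor's note] The pod_startup_latency_tracker entry above for tempest-tests-tempest is reproducible from its own timestamps: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (firstStartedPulling to lastFinishedPulling). A minimal sketch of the arithmetic, trimming the logged nanoseconds to the microseconds Python's datetime can parse:

```python
from datetime import datetime

def parse(ts: str) -> datetime:
    # Trim the logged nanoseconds to microseconds for %f.
    return datetime.strptime(ts[:26], "%Y-%m-%d %H:%M:%S.%f")

created = datetime.strptime("2025-12-02 14:37:08", "%Y-%m-%d %H:%M:%S")
first_pull = parse("2025-12-02 14:37:10.394876919")  # firstStartedPulling
last_pull = parse("2025-12-02 14:38:06.411760831")   # lastFinishedPulling
running = parse("2025-12-02 14:38:08.72015112")      # watchObservedRunningTime

e2e = (running - created).total_seconds()
slo = e2e - (last_pull - first_pull).total_seconds()
print(f"podStartE2EDuration={e2e:.6f}s podStartSLOduration={slo:.6f}s")
# -> 60.720151s and 4.703267s, matching the logged 1m0.72015112s and
#    4.703267208s up to the dropped nanosecond digits.
```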
podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:38:25 crc kubenswrapper[4625]: I1202 14:38:25.856420 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:38:25 crc kubenswrapper[4625]: E1202 14:38:25.857628 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:38:40 crc kubenswrapper[4625]: I1202 14:38:40.857419 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:38:40 crc kubenswrapper[4625]: E1202 14:38:40.858910 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:38:51 crc kubenswrapper[4625]: I1202 14:38:51.856671 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:38:51 crc kubenswrapper[4625]: E1202 14:38:51.857986 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:39:04 crc kubenswrapper[4625]: I1202 14:39:04.866087 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:39:04 crc kubenswrapper[4625]: E1202 14:39:04.867102 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:39:16 crc kubenswrapper[4625]: I1202 14:39:16.856386 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:39:16 crc kubenswrapper[4625]: E1202 14:39:16.857150 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:39:31 crc kubenswrapper[4625]: I1202 14:39:31.878491 4625 scope.go:117] "RemoveContainer" 
containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:39:31 crc kubenswrapper[4625]: E1202 14:39:31.879855 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:39:45 crc kubenswrapper[4625]: I1202 14:39:45.857835 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:39:45 crc kubenswrapper[4625]: E1202 14:39:45.859220 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:39:57 crc kubenswrapper[4625]: I1202 14:39:57.857003 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:39:57 crc kubenswrapper[4625]: E1202 14:39:57.858458 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:40:09 crc kubenswrapper[4625]: I1202 14:40:09.856570 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:40:09 crc kubenswrapper[4625]: E1202 14:40:09.857802 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:40:20 crc kubenswrapper[4625]: I1202 14:40:20.856503 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:40:20 crc kubenswrapper[4625]: E1202 14:40:20.858828 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:40:32 crc kubenswrapper[4625]: I1202 14:40:32.857388 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:40:32 crc kubenswrapper[4625]: E1202 14:40:32.858177 4625 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:40:45 crc kubenswrapper[4625]: I1202 14:40:45.856357 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:40:45 crc kubenswrapper[4625]: E1202 14:40:45.857245 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:41:00 crc kubenswrapper[4625]: I1202 14:41:00.857012 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9" Dec 02 14:41:01 crc kubenswrapper[4625]: I1202 14:41:01.918166 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"cf7a702099ca825d0bc85749e0559971a5464c9093309f3754d907b473e2f654"} Dec 02 14:41:30 crc kubenswrapper[4625]: I1202 14:41:30.655792 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p2kkd"] Dec 02 14:41:30 crc kubenswrapper[4625]: I1202 14:41:30.659431 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:30 crc kubenswrapper[4625]: I1202 14:41:30.673413 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p2kkd"] Dec 02 14:41:30 crc kubenswrapper[4625]: I1202 14:41:30.789888 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/817f0a0b-a22d-4384-8801-5caf263f955e-catalog-content\") pod \"certified-operators-p2kkd\" (UID: \"817f0a0b-a22d-4384-8801-5caf263f955e\") " pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:30 crc kubenswrapper[4625]: I1202 14:41:30.790029 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nxg7\" (UniqueName: \"kubernetes.io/projected/817f0a0b-a22d-4384-8801-5caf263f955e-kube-api-access-4nxg7\") pod \"certified-operators-p2kkd\" (UID: \"817f0a0b-a22d-4384-8801-5caf263f955e\") " pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:30 crc kubenswrapper[4625]: I1202 14:41:30.790291 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/817f0a0b-a22d-4384-8801-5caf263f955e-utilities\") pod \"certified-operators-p2kkd\" (UID: \"817f0a0b-a22d-4384-8801-5caf263f955e\") " pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:30 crc kubenswrapper[4625]: I1202 14:41:30.895353 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/817f0a0b-a22d-4384-8801-5caf263f955e-utilities\") pod \"certified-operators-p2kkd\" (UID: \"817f0a0b-a22d-4384-8801-5caf263f955e\") " pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:30 crc kubenswrapper[4625]: I1202 14:41:30.896028 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/817f0a0b-a22d-4384-8801-5caf263f955e-catalog-content\") pod \"certified-operators-p2kkd\" (UID: \"817f0a0b-a22d-4384-8801-5caf263f955e\") " pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:30 crc kubenswrapper[4625]: I1202 14:41:30.896115 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nxg7\" (UniqueName: \"kubernetes.io/projected/817f0a0b-a22d-4384-8801-5caf263f955e-kube-api-access-4nxg7\") pod \"certified-operators-p2kkd\" (UID: \"817f0a0b-a22d-4384-8801-5caf263f955e\") " pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:30 crc kubenswrapper[4625]: I1202 14:41:30.896179 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/817f0a0b-a22d-4384-8801-5caf263f955e-utilities\") pod \"certified-operators-p2kkd\" (UID: \"817f0a0b-a22d-4384-8801-5caf263f955e\") " pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:30 crc kubenswrapper[4625]: I1202 14:41:30.896483 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/817f0a0b-a22d-4384-8801-5caf263f955e-catalog-content\") pod \"certified-operators-p2kkd\" (UID: \"817f0a0b-a22d-4384-8801-5caf263f955e\") " pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:30 crc kubenswrapper[4625]: I1202 14:41:30.941303 4625 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4nxg7\" (UniqueName: \"kubernetes.io/projected/817f0a0b-a22d-4384-8801-5caf263f955e-kube-api-access-4nxg7\") pod \"certified-operators-p2kkd\" (UID: \"817f0a0b-a22d-4384-8801-5caf263f955e\") " pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:30 crc kubenswrapper[4625]: I1202 14:41:30.987557 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:32 crc kubenswrapper[4625]: I1202 14:41:32.088220 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p2kkd"] Dec 02 14:41:32 crc kubenswrapper[4625]: I1202 14:41:32.242550 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2kkd" event={"ID":"817f0a0b-a22d-4384-8801-5caf263f955e","Type":"ContainerStarted","Data":"0aece091bd2ba1d646c8607fc9b2f5a4bf5478017fab761be10de89acfeeb1b8"} Dec 02 14:41:33 crc kubenswrapper[4625]: I1202 14:41:33.257295 4625 generic.go:334] "Generic (PLEG): container finished" podID="817f0a0b-a22d-4384-8801-5caf263f955e" containerID="5f05bdfdf3c5d3e4983c373dc1fc16b4d75896b5ba77eb50c58236a41a1c9d5d" exitCode=0 Dec 02 14:41:33 crc kubenswrapper[4625]: I1202 14:41:33.257409 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2kkd" event={"ID":"817f0a0b-a22d-4384-8801-5caf263f955e","Type":"ContainerDied","Data":"5f05bdfdf3c5d3e4983c373dc1fc16b4d75896b5ba77eb50c58236a41a1c9d5d"} Dec 02 14:41:35 crc kubenswrapper[4625]: I1202 14:41:35.297629 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2kkd" event={"ID":"817f0a0b-a22d-4384-8801-5caf263f955e","Type":"ContainerStarted","Data":"3bc7517315e45b139c22e145d3e5d724827d7930e589de5b45eccdbabfee8186"} Dec 02 14:41:36 crc kubenswrapper[4625]: I1202 14:41:36.313226 4625 generic.go:334] "Generic (PLEG): container finished" podID="817f0a0b-a22d-4384-8801-5caf263f955e" containerID="3bc7517315e45b139c22e145d3e5d724827d7930e589de5b45eccdbabfee8186" exitCode=0 Dec 02 14:41:36 crc kubenswrapper[4625]: I1202 14:41:36.313298 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2kkd" event={"ID":"817f0a0b-a22d-4384-8801-5caf263f955e","Type":"ContainerDied","Data":"3bc7517315e45b139c22e145d3e5d724827d7930e589de5b45eccdbabfee8186"} Dec 02 14:41:37 crc kubenswrapper[4625]: I1202 14:41:37.327834 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2kkd" event={"ID":"817f0a0b-a22d-4384-8801-5caf263f955e","Type":"ContainerStarted","Data":"533f7c496293fa1a36f975d3258ac0f266d2363b751647b4c4f0c39b7bda5dd0"} Dec 02 14:41:37 crc kubenswrapper[4625]: I1202 14:41:37.355829 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p2kkd" podStartSLOduration=3.69605728 podStartE2EDuration="7.355782324s" podCreationTimestamp="2025-12-02 14:41:30 +0000 UTC" firstStartedPulling="2025-12-02 14:41:33.261328099 +0000 UTC m=+3449.223505174" lastFinishedPulling="2025-12-02 14:41:36.921053143 +0000 UTC m=+3452.883230218" observedRunningTime="2025-12-02 14:41:37.352052424 +0000 UTC m=+3453.314229499" watchObservedRunningTime="2025-12-02 14:41:37.355782324 +0000 UTC m=+3453.317959399" Dec 02 14:41:40 crc kubenswrapper[4625]: I1202 14:41:40.988689 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:40 crc kubenswrapper[4625]: I1202 14:41:40.990555 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:42 crc kubenswrapper[4625]: I1202 14:41:42.044081 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-p2kkd" podUID="817f0a0b-a22d-4384-8801-5caf263f955e" containerName="registry-server" probeResult="failure" output=< Dec 02 14:41:42 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s Dec 02 14:41:42 crc kubenswrapper[4625]: > Dec 02 14:41:51 crc kubenswrapper[4625]: I1202 14:41:51.054541 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:51 crc kubenswrapper[4625]: I1202 14:41:51.172097 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:51 crc kubenswrapper[4625]: I1202 14:41:51.355056 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p2kkd"] Dec 02 14:41:52 crc kubenswrapper[4625]: I1202 14:41:52.496720 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p2kkd" podUID="817f0a0b-a22d-4384-8801-5caf263f955e" containerName="registry-server" containerID="cri-o://533f7c496293fa1a36f975d3258ac0f266d2363b751647b4c4f0c39b7bda5dd0" gracePeriod=2 Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.239344 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.245685 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nxg7\" (UniqueName: \"kubernetes.io/projected/817f0a0b-a22d-4384-8801-5caf263f955e-kube-api-access-4nxg7\") pod \"817f0a0b-a22d-4384-8801-5caf263f955e\" (UID: \"817f0a0b-a22d-4384-8801-5caf263f955e\") " Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.245892 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/817f0a0b-a22d-4384-8801-5caf263f955e-catalog-content\") pod \"817f0a0b-a22d-4384-8801-5caf263f955e\" (UID: \"817f0a0b-a22d-4384-8801-5caf263f955e\") " Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.246113 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/817f0a0b-a22d-4384-8801-5caf263f955e-utilities\") pod \"817f0a0b-a22d-4384-8801-5caf263f955e\" (UID: \"817f0a0b-a22d-4384-8801-5caf263f955e\") " Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.246692 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/817f0a0b-a22d-4384-8801-5caf263f955e-utilities" (OuterVolumeSpecName: "utilities") pod "817f0a0b-a22d-4384-8801-5caf263f955e" (UID: "817f0a0b-a22d-4384-8801-5caf263f955e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.255643 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817f0a0b-a22d-4384-8801-5caf263f955e-kube-api-access-4nxg7" (OuterVolumeSpecName: "kube-api-access-4nxg7") pod "817f0a0b-a22d-4384-8801-5caf263f955e" (UID: "817f0a0b-a22d-4384-8801-5caf263f955e"). InnerVolumeSpecName "kube-api-access-4nxg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.309380 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/817f0a0b-a22d-4384-8801-5caf263f955e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "817f0a0b-a22d-4384-8801-5caf263f955e" (UID: "817f0a0b-a22d-4384-8801-5caf263f955e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.353448 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/817f0a0b-a22d-4384-8801-5caf263f955e-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.353503 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nxg7\" (UniqueName: \"kubernetes.io/projected/817f0a0b-a22d-4384-8801-5caf263f955e-kube-api-access-4nxg7\") on node \"crc\" DevicePath \"\"" Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.353517 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/817f0a0b-a22d-4384-8801-5caf263f955e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.509682 4625 generic.go:334] "Generic (PLEG): container finished" podID="817f0a0b-a22d-4384-8801-5caf263f955e" containerID="533f7c496293fa1a36f975d3258ac0f266d2363b751647b4c4f0c39b7bda5dd0" exitCode=0 Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.509763 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p2kkd" Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.510744 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2kkd" event={"ID":"817f0a0b-a22d-4384-8801-5caf263f955e","Type":"ContainerDied","Data":"533f7c496293fa1a36f975d3258ac0f266d2363b751647b4c4f0c39b7bda5dd0"} Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.510909 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2kkd" event={"ID":"817f0a0b-a22d-4384-8801-5caf263f955e","Type":"ContainerDied","Data":"0aece091bd2ba1d646c8607fc9b2f5a4bf5478017fab761be10de89acfeeb1b8"} Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.511023 4625 scope.go:117] "RemoveContainer" containerID="533f7c496293fa1a36f975d3258ac0f266d2363b751647b4c4f0c39b7bda5dd0" Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.564634 4625 scope.go:117] "RemoveContainer" containerID="3bc7517315e45b139c22e145d3e5d724827d7930e589de5b45eccdbabfee8186" Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.565386 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p2kkd"] Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.577989 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p2kkd"] Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.612213 4625 scope.go:117] "RemoveContainer" containerID="5f05bdfdf3c5d3e4983c373dc1fc16b4d75896b5ba77eb50c58236a41a1c9d5d" Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.647841 4625 scope.go:117] "RemoveContainer" containerID="533f7c496293fa1a36f975d3258ac0f266d2363b751647b4c4f0c39b7bda5dd0" Dec 02 14:41:53 crc kubenswrapper[4625]: E1202 14:41:53.648675 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"533f7c496293fa1a36f975d3258ac0f266d2363b751647b4c4f0c39b7bda5dd0\": container with ID starting with 533f7c496293fa1a36f975d3258ac0f266d2363b751647b4c4f0c39b7bda5dd0 not found: ID does not exist" containerID="533f7c496293fa1a36f975d3258ac0f266d2363b751647b4c4f0c39b7bda5dd0" Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.648743 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"533f7c496293fa1a36f975d3258ac0f266d2363b751647b4c4f0c39b7bda5dd0"} err="failed to get container status \"533f7c496293fa1a36f975d3258ac0f266d2363b751647b4c4f0c39b7bda5dd0\": rpc error: code = NotFound desc = could not find container \"533f7c496293fa1a36f975d3258ac0f266d2363b751647b4c4f0c39b7bda5dd0\": container with ID starting with 533f7c496293fa1a36f975d3258ac0f266d2363b751647b4c4f0c39b7bda5dd0 not found: ID does not exist" Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.648788 4625 scope.go:117] "RemoveContainer" containerID="3bc7517315e45b139c22e145d3e5d724827d7930e589de5b45eccdbabfee8186" Dec 02 14:41:53 crc kubenswrapper[4625]: E1202 14:41:53.649246 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bc7517315e45b139c22e145d3e5d724827d7930e589de5b45eccdbabfee8186\": container with ID starting with 3bc7517315e45b139c22e145d3e5d724827d7930e589de5b45eccdbabfee8186 not found: ID does not exist" containerID="3bc7517315e45b139c22e145d3e5d724827d7930e589de5b45eccdbabfee8186" Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.649279 4625 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc7517315e45b139c22e145d3e5d724827d7930e589de5b45eccdbabfee8186"} err="failed to get container status \"3bc7517315e45b139c22e145d3e5d724827d7930e589de5b45eccdbabfee8186\": rpc error: code = NotFound desc = could not find container \"3bc7517315e45b139c22e145d3e5d724827d7930e589de5b45eccdbabfee8186\": container with ID starting with 3bc7517315e45b139c22e145d3e5d724827d7930e589de5b45eccdbabfee8186 not found: ID does not exist" Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.649293 4625 scope.go:117] "RemoveContainer" containerID="5f05bdfdf3c5d3e4983c373dc1fc16b4d75896b5ba77eb50c58236a41a1c9d5d" Dec 02 14:41:53 crc kubenswrapper[4625]: E1202 14:41:53.649760 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f05bdfdf3c5d3e4983c373dc1fc16b4d75896b5ba77eb50c58236a41a1c9d5d\": container with ID starting with 5f05bdfdf3c5d3e4983c373dc1fc16b4d75896b5ba77eb50c58236a41a1c9d5d not found: ID does not exist" containerID="5f05bdfdf3c5d3e4983c373dc1fc16b4d75896b5ba77eb50c58236a41a1c9d5d" Dec 02 14:41:53 crc kubenswrapper[4625]: I1202 14:41:53.649778 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f05bdfdf3c5d3e4983c373dc1fc16b4d75896b5ba77eb50c58236a41a1c9d5d"} err="failed to get container status \"5f05bdfdf3c5d3e4983c373dc1fc16b4d75896b5ba77eb50c58236a41a1c9d5d\": rpc error: code = NotFound desc = could not find container \"5f05bdfdf3c5d3e4983c373dc1fc16b4d75896b5ba77eb50c58236a41a1c9d5d\": container with ID starting with 5f05bdfdf3c5d3e4983c373dc1fc16b4d75896b5ba77eb50c58236a41a1c9d5d not found: ID does not exist" Dec 02 14:41:54 crc kubenswrapper[4625]: I1202 14:41:54.873246 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="817f0a0b-a22d-4384-8801-5caf263f955e" path="/var/lib/kubelet/pods/817f0a0b-a22d-4384-8801-5caf263f955e/volumes" Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.323096 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zfdlc"] Dec 02 14:43:02 crc kubenswrapper[4625]: E1202 14:43:02.324603 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817f0a0b-a22d-4384-8801-5caf263f955e" containerName="extract-content" Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.324641 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="817f0a0b-a22d-4384-8801-5caf263f955e" containerName="extract-content" Dec 02 14:43:02 crc kubenswrapper[4625]: E1202 14:43:02.324679 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817f0a0b-a22d-4384-8801-5caf263f955e" containerName="registry-server" Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.324687 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="817f0a0b-a22d-4384-8801-5caf263f955e" containerName="registry-server" Dec 02 14:43:02 crc kubenswrapper[4625]: E1202 14:43:02.324716 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817f0a0b-a22d-4384-8801-5caf263f955e" containerName="extract-utilities" Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.324724 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="817f0a0b-a22d-4384-8801-5caf263f955e" containerName="extract-utilities" Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.325018 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="817f0a0b-a22d-4384-8801-5caf263f955e" 
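The NotFound errors in this stretch are a benign race rather than a failure: each container was already gone from the runtime when the follow-up ContainerStatus lookup ran, so the delete is effectively a no-op. When scanning a journal for genuine runtime errors it helps to drop these; a crude filter (illustrative only):

import sys

# Drop the benign "container already gone" errors that follow pod removal:
# both the ContainerStatus lookup failure and the DeleteContainer follow-up
# carry 'code = NotFound' plus 'ID does not exist' for a just-removed ID.
for line in sys.stdin:
    if "code = NotFound" in line and "not found: ID does not exist" in line:
        continue
    sys.stdout.write(line)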
containerName="registry-server" Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.330860 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.333766 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zfdlc"] Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.412002 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fac3bee3-665d-44d3-95e3-6cc88d576e43-catalog-content\") pod \"community-operators-zfdlc\" (UID: \"fac3bee3-665d-44d3-95e3-6cc88d576e43\") " pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.412713 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd7pj\" (UniqueName: \"kubernetes.io/projected/fac3bee3-665d-44d3-95e3-6cc88d576e43-kube-api-access-gd7pj\") pod \"community-operators-zfdlc\" (UID: \"fac3bee3-665d-44d3-95e3-6cc88d576e43\") " pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.412792 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fac3bee3-665d-44d3-95e3-6cc88d576e43-utilities\") pod \"community-operators-zfdlc\" (UID: \"fac3bee3-665d-44d3-95e3-6cc88d576e43\") " pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.515125 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fac3bee3-665d-44d3-95e3-6cc88d576e43-utilities\") pod \"community-operators-zfdlc\" (UID: \"fac3bee3-665d-44d3-95e3-6cc88d576e43\") " pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.515290 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fac3bee3-665d-44d3-95e3-6cc88d576e43-catalog-content\") pod \"community-operators-zfdlc\" (UID: \"fac3bee3-665d-44d3-95e3-6cc88d576e43\") " pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.515467 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd7pj\" (UniqueName: \"kubernetes.io/projected/fac3bee3-665d-44d3-95e3-6cc88d576e43-kube-api-access-gd7pj\") pod \"community-operators-zfdlc\" (UID: \"fac3bee3-665d-44d3-95e3-6cc88d576e43\") " pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.515832 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fac3bee3-665d-44d3-95e3-6cc88d576e43-catalog-content\") pod \"community-operators-zfdlc\" (UID: \"fac3bee3-665d-44d3-95e3-6cc88d576e43\") " pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.516168 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fac3bee3-665d-44d3-95e3-6cc88d576e43-utilities\") pod \"community-operators-zfdlc\" (UID: \"fac3bee3-665d-44d3-95e3-6cc88d576e43\") " 
pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.541381 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd7pj\" (UniqueName: \"kubernetes.io/projected/fac3bee3-665d-44d3-95e3-6cc88d576e43-kube-api-access-gd7pj\") pod \"community-operators-zfdlc\" (UID: \"fac3bee3-665d-44d3-95e3-6cc88d576e43\") " pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:02 crc kubenswrapper[4625]: I1202 14:43:02.659630 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:03 crc kubenswrapper[4625]: I1202 14:43:03.323299 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zfdlc"] Dec 02 14:43:03 crc kubenswrapper[4625]: W1202 14:43:03.333402 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfac3bee3_665d_44d3_95e3_6cc88d576e43.slice/crio-e67395ec130f7b8f769fe51599d1cb961a6436363a2b3c806d37aa95fc316b7b WatchSource:0}: Error finding container e67395ec130f7b8f769fe51599d1cb961a6436363a2b3c806d37aa95fc316b7b: Status 404 returned error can't find the container with id e67395ec130f7b8f769fe51599d1cb961a6436363a2b3c806d37aa95fc316b7b Dec 02 14:43:04 crc kubenswrapper[4625]: I1202 14:43:04.323510 4625 generic.go:334] "Generic (PLEG): container finished" podID="fac3bee3-665d-44d3-95e3-6cc88d576e43" containerID="4007ea5c5a1231d6785bce6de00756d4033554f99bb3ef8bbd3cb0ded4768592" exitCode=0 Dec 02 14:43:04 crc kubenswrapper[4625]: I1202 14:43:04.323615 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfdlc" event={"ID":"fac3bee3-665d-44d3-95e3-6cc88d576e43","Type":"ContainerDied","Data":"4007ea5c5a1231d6785bce6de00756d4033554f99bb3ef8bbd3cb0ded4768592"} Dec 02 14:43:04 crc kubenswrapper[4625]: I1202 14:43:04.323826 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfdlc" event={"ID":"fac3bee3-665d-44d3-95e3-6cc88d576e43","Type":"ContainerStarted","Data":"e67395ec130f7b8f769fe51599d1cb961a6436363a2b3c806d37aa95fc316b7b"} Dec 02 14:43:04 crc kubenswrapper[4625]: I1202 14:43:04.327648 4625 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 14:43:06 crc kubenswrapper[4625]: I1202 14:43:06.348843 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfdlc" event={"ID":"fac3bee3-665d-44d3-95e3-6cc88d576e43","Type":"ContainerStarted","Data":"892c0289ce79de44f740391f5dd92ce2d4be51a0467cd8d2fb081c5e25a1ce6b"} Dec 02 14:43:07 crc kubenswrapper[4625]: I1202 14:43:07.360929 4625 generic.go:334] "Generic (PLEG): container finished" podID="fac3bee3-665d-44d3-95e3-6cc88d576e43" containerID="892c0289ce79de44f740391f5dd92ce2d4be51a0467cd8d2fb081c5e25a1ce6b" exitCode=0 Dec 02 14:43:07 crc kubenswrapper[4625]: I1202 14:43:07.361022 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfdlc" event={"ID":"fac3bee3-665d-44d3-95e3-6cc88d576e43","Type":"ContainerDied","Data":"892c0289ce79de44f740391f5dd92ce2d4be51a0467cd8d2fb081c5e25a1ce6b"} Dec 02 14:43:08 crc kubenswrapper[4625]: I1202 14:43:08.376859 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfdlc" 
event={"ID":"fac3bee3-665d-44d3-95e3-6cc88d576e43","Type":"ContainerStarted","Data":"eb2b4335e28220ebc3a50c0d831c95f7d40f3e6e46efae00d92c14e00d7cb6f7"} Dec 02 14:43:08 crc kubenswrapper[4625]: I1202 14:43:08.409194 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zfdlc" podStartSLOduration=2.7240669410000002 podStartE2EDuration="6.409165645s" podCreationTimestamp="2025-12-02 14:43:02 +0000 UTC" firstStartedPulling="2025-12-02 14:43:04.327356091 +0000 UTC m=+3540.289533166" lastFinishedPulling="2025-12-02 14:43:08.012454795 +0000 UTC m=+3543.974631870" observedRunningTime="2025-12-02 14:43:08.39933127 +0000 UTC m=+3544.361508365" watchObservedRunningTime="2025-12-02 14:43:08.409165645 +0000 UTC m=+3544.371342720" Dec 02 14:43:12 crc kubenswrapper[4625]: I1202 14:43:12.660244 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:12 crc kubenswrapper[4625]: I1202 14:43:12.661270 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:13 crc kubenswrapper[4625]: I1202 14:43:13.737713 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zfdlc" podUID="fac3bee3-665d-44d3-95e3-6cc88d576e43" containerName="registry-server" probeResult="failure" output=< Dec 02 14:43:13 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s Dec 02 14:43:13 crc kubenswrapper[4625]: > Dec 02 14:43:19 crc kubenswrapper[4625]: I1202 14:43:19.271220 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:43:19 crc kubenswrapper[4625]: I1202 14:43:19.272012 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:43:22 crc kubenswrapper[4625]: I1202 14:43:22.711649 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:22 crc kubenswrapper[4625]: I1202 14:43:22.776578 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:22 crc kubenswrapper[4625]: I1202 14:43:22.979824 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zfdlc"] Dec 02 14:43:24 crc kubenswrapper[4625]: I1202 14:43:24.612505 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zfdlc" podUID="fac3bee3-665d-44d3-95e3-6cc88d576e43" containerName="registry-server" containerID="cri-o://eb2b4335e28220ebc3a50c0d831c95f7d40f3e6e46efae00d92c14e00d7cb6f7" gracePeriod=2 Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.455444 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.560909 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd7pj\" (UniqueName: \"kubernetes.io/projected/fac3bee3-665d-44d3-95e3-6cc88d576e43-kube-api-access-gd7pj\") pod \"fac3bee3-665d-44d3-95e3-6cc88d576e43\" (UID: \"fac3bee3-665d-44d3-95e3-6cc88d576e43\") " Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.561119 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fac3bee3-665d-44d3-95e3-6cc88d576e43-catalog-content\") pod \"fac3bee3-665d-44d3-95e3-6cc88d576e43\" (UID: \"fac3bee3-665d-44d3-95e3-6cc88d576e43\") " Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.561325 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fac3bee3-665d-44d3-95e3-6cc88d576e43-utilities\") pod \"fac3bee3-665d-44d3-95e3-6cc88d576e43\" (UID: \"fac3bee3-665d-44d3-95e3-6cc88d576e43\") " Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.562324 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fac3bee3-665d-44d3-95e3-6cc88d576e43-utilities" (OuterVolumeSpecName: "utilities") pod "fac3bee3-665d-44d3-95e3-6cc88d576e43" (UID: "fac3bee3-665d-44d3-95e3-6cc88d576e43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.573027 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac3bee3-665d-44d3-95e3-6cc88d576e43-kube-api-access-gd7pj" (OuterVolumeSpecName: "kube-api-access-gd7pj") pod "fac3bee3-665d-44d3-95e3-6cc88d576e43" (UID: "fac3bee3-665d-44d3-95e3-6cc88d576e43"). InnerVolumeSpecName "kube-api-access-gd7pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.635656 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fac3bee3-665d-44d3-95e3-6cc88d576e43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fac3bee3-665d-44d3-95e3-6cc88d576e43" (UID: "fac3bee3-665d-44d3-95e3-6cc88d576e43"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.637683 4625 generic.go:334] "Generic (PLEG): container finished" podID="fac3bee3-665d-44d3-95e3-6cc88d576e43" containerID="eb2b4335e28220ebc3a50c0d831c95f7d40f3e6e46efae00d92c14e00d7cb6f7" exitCode=0 Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.637760 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfdlc" event={"ID":"fac3bee3-665d-44d3-95e3-6cc88d576e43","Type":"ContainerDied","Data":"eb2b4335e28220ebc3a50c0d831c95f7d40f3e6e46efae00d92c14e00d7cb6f7"} Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.637804 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfdlc" event={"ID":"fac3bee3-665d-44d3-95e3-6cc88d576e43","Type":"ContainerDied","Data":"e67395ec130f7b8f769fe51599d1cb961a6436363a2b3c806d37aa95fc316b7b"} Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.637834 4625 scope.go:117] "RemoveContainer" containerID="eb2b4335e28220ebc3a50c0d831c95f7d40f3e6e46efae00d92c14e00d7cb6f7" Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.637870 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zfdlc" Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.664095 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fac3bee3-665d-44d3-95e3-6cc88d576e43-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.664552 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd7pj\" (UniqueName: \"kubernetes.io/projected/fac3bee3-665d-44d3-95e3-6cc88d576e43-kube-api-access-gd7pj\") on node \"crc\" DevicePath \"\"" Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.664564 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fac3bee3-665d-44d3-95e3-6cc88d576e43-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.698607 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zfdlc"] Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.714398 4625 scope.go:117] "RemoveContainer" containerID="892c0289ce79de44f740391f5dd92ce2d4be51a0467cd8d2fb081c5e25a1ce6b" Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.718643 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zfdlc"] Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.792866 4625 scope.go:117] "RemoveContainer" containerID="4007ea5c5a1231d6785bce6de00756d4033554f99bb3ef8bbd3cb0ded4768592" Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.826083 4625 scope.go:117] "RemoveContainer" containerID="eb2b4335e28220ebc3a50c0d831c95f7d40f3e6e46efae00d92c14e00d7cb6f7" Dec 02 14:43:25 crc kubenswrapper[4625]: E1202 14:43:25.826873 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb2b4335e28220ebc3a50c0d831c95f7d40f3e6e46efae00d92c14e00d7cb6f7\": container with ID starting with eb2b4335e28220ebc3a50c0d831c95f7d40f3e6e46efae00d92c14e00d7cb6f7 not found: ID does not exist" containerID="eb2b4335e28220ebc3a50c0d831c95f7d40f3e6e46efae00d92c14e00d7cb6f7" Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.826935 
4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2b4335e28220ebc3a50c0d831c95f7d40f3e6e46efae00d92c14e00d7cb6f7"} err="failed to get container status \"eb2b4335e28220ebc3a50c0d831c95f7d40f3e6e46efae00d92c14e00d7cb6f7\": rpc error: code = NotFound desc = could not find container \"eb2b4335e28220ebc3a50c0d831c95f7d40f3e6e46efae00d92c14e00d7cb6f7\": container with ID starting with eb2b4335e28220ebc3a50c0d831c95f7d40f3e6e46efae00d92c14e00d7cb6f7 not found: ID does not exist" Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.826966 4625 scope.go:117] "RemoveContainer" containerID="892c0289ce79de44f740391f5dd92ce2d4be51a0467cd8d2fb081c5e25a1ce6b" Dec 02 14:43:25 crc kubenswrapper[4625]: E1202 14:43:25.827364 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"892c0289ce79de44f740391f5dd92ce2d4be51a0467cd8d2fb081c5e25a1ce6b\": container with ID starting with 892c0289ce79de44f740391f5dd92ce2d4be51a0467cd8d2fb081c5e25a1ce6b not found: ID does not exist" containerID="892c0289ce79de44f740391f5dd92ce2d4be51a0467cd8d2fb081c5e25a1ce6b" Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.827425 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"892c0289ce79de44f740391f5dd92ce2d4be51a0467cd8d2fb081c5e25a1ce6b"} err="failed to get container status \"892c0289ce79de44f740391f5dd92ce2d4be51a0467cd8d2fb081c5e25a1ce6b\": rpc error: code = NotFound desc = could not find container \"892c0289ce79de44f740391f5dd92ce2d4be51a0467cd8d2fb081c5e25a1ce6b\": container with ID starting with 892c0289ce79de44f740391f5dd92ce2d4be51a0467cd8d2fb081c5e25a1ce6b not found: ID does not exist" Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.827503 4625 scope.go:117] "RemoveContainer" containerID="4007ea5c5a1231d6785bce6de00756d4033554f99bb3ef8bbd3cb0ded4768592" Dec 02 14:43:25 crc kubenswrapper[4625]: E1202 14:43:25.827949 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4007ea5c5a1231d6785bce6de00756d4033554f99bb3ef8bbd3cb0ded4768592\": container with ID starting with 4007ea5c5a1231d6785bce6de00756d4033554f99bb3ef8bbd3cb0ded4768592 not found: ID does not exist" containerID="4007ea5c5a1231d6785bce6de00756d4033554f99bb3ef8bbd3cb0ded4768592" Dec 02 14:43:25 crc kubenswrapper[4625]: I1202 14:43:25.827985 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4007ea5c5a1231d6785bce6de00756d4033554f99bb3ef8bbd3cb0ded4768592"} err="failed to get container status \"4007ea5c5a1231d6785bce6de00756d4033554f99bb3ef8bbd3cb0ded4768592\": rpc error: code = NotFound desc = could not find container \"4007ea5c5a1231d6785bce6de00756d4033554f99bb3ef8bbd3cb0ded4768592\": container with ID starting with 4007ea5c5a1231d6785bce6de00756d4033554f99bb3ef8bbd3cb0ded4768592 not found: ID does not exist" Dec 02 14:43:26 crc kubenswrapper[4625]: I1202 14:43:26.870467 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac3bee3-665d-44d3-95e3-6cc88d576e43" path="/var/lib/kubelet/pods/fac3bee3-665d-44d3-95e3-6cc88d576e43/volumes" Dec 02 14:43:49 crc kubenswrapper[4625]: I1202 14:43:49.272005 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
Dec 02 14:43:49 crc kubenswrapper[4625]: I1202 14:43:49.272005 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 14:43:49 crc kubenswrapper[4625]: I1202 14:43:49.272786 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 14:44:19 crc kubenswrapper[4625]: I1202 14:44:19.271930 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 14:44:19 crc kubenswrapper[4625]: I1202 14:44:19.272913 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 14:44:19 crc kubenswrapper[4625]: I1202 14:44:19.272985 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f"
Dec 02 14:44:19 crc kubenswrapper[4625]: I1202 14:44:19.274058 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf7a702099ca825d0bc85749e0559971a5464c9093309f3754d907b473e2f654"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 14:44:19 crc kubenswrapper[4625]: I1202 14:44:19.274126 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://cf7a702099ca825d0bc85749e0559971a5464c9093309f3754d907b473e2f654" gracePeriod=600
Dec 02 14:44:20 crc kubenswrapper[4625]: I1202 14:44:20.227652 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="cf7a702099ca825d0bc85749e0559971a5464c9093309f3754d907b473e2f654" exitCode=0
Dec 02 14:44:20 crc kubenswrapper[4625]: I1202 14:44:20.227773 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"cf7a702099ca825d0bc85749e0559971a5464c9093309f3754d907b473e2f654"}
Dec 02 14:44:20 crc kubenswrapper[4625]: I1202 14:44:20.229269 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"}
Dec 02 14:44:20 crc kubenswrapper[4625]: I1202 14:44:20.229334 4625 scope.go:117] "RemoveContainer" containerID="272b599e2325251a0d21ebacafd55b84b37acce168cff669f6b189cc09d2acd9"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.179989 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj"]
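Three consecutive liveness failures at a 30s spacing (14:43:19, 14:43:49, 14:44:19) push the container over its failure threshold, so kubelet kills it with the pod's 600s grace period and starts a replacement. The counts are consistent with the default failureThreshold of 3, though the pod spec itself isn't in the log; the bookkeeping reduces to a consecutive-failure counter:

class ProbeTracker:
    """Restart decision after N consecutive liveness failures (default 3)."""
    def __init__(self, failure_threshold: int = 3):
        self.failure_threshold = failure_threshold
        self.failures = 0

    def record(self, healthy: bool) -> bool:
        """Record one probe result; return True when a restart is due."""
        self.failures = 0 if healthy else self.failures + 1
        return self.failures >= self.failure_threshold

t = ProbeTracker()
results = [False, False, False]  # 14:43:19, 14:43:49, 14:44:19
print([t.record(r) for r in results])  # [False, False, True]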
Dec 02 14:45:00 crc kubenswrapper[4625]: E1202 14:45:00.181297 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac3bee3-665d-44d3-95e3-6cc88d576e43" containerName="registry-server"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.181335 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac3bee3-665d-44d3-95e3-6cc88d576e43" containerName="registry-server"
Dec 02 14:45:00 crc kubenswrapper[4625]: E1202 14:45:00.181398 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac3bee3-665d-44d3-95e3-6cc88d576e43" containerName="extract-content"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.181407 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac3bee3-665d-44d3-95e3-6cc88d576e43" containerName="extract-content"
Dec 02 14:45:00 crc kubenswrapper[4625]: E1202 14:45:00.181428 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac3bee3-665d-44d3-95e3-6cc88d576e43" containerName="extract-utilities"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.181438 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac3bee3-665d-44d3-95e3-6cc88d576e43" containerName="extract-utilities"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.181726 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac3bee3-665d-44d3-95e3-6cc88d576e43" containerName="registry-server"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.182821 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.187411 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.206601 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.210549 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj"]
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.289735 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zzvt\" (UniqueName: \"kubernetes.io/projected/961e4194-42c6-4bda-9ec5-6122c2e2ace6-kube-api-access-8zzvt\") pod \"collect-profiles-29411445-vw4nj\" (UID: \"961e4194-42c6-4bda-9ec5-6122c2e2ace6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.289845 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/961e4194-42c6-4bda-9ec5-6122c2e2ace6-config-volume\") pod \"collect-profiles-29411445-vw4nj\" (UID: \"961e4194-42c6-4bda-9ec5-6122c2e2ace6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.290400 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/961e4194-42c6-4bda-9ec5-6122c2e2ace6-secret-volume\") pod \"collect-profiles-29411445-vw4nj\" (UID: \"961e4194-42c6-4bda-9ec5-6122c2e2ace6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.392601 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/961e4194-42c6-4bda-9ec5-6122c2e2ace6-secret-volume\") pod \"collect-profiles-29411445-vw4nj\" (UID: \"961e4194-42c6-4bda-9ec5-6122c2e2ace6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.392796 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zzvt\" (UniqueName: \"kubernetes.io/projected/961e4194-42c6-4bda-9ec5-6122c2e2ace6-kube-api-access-8zzvt\") pod \"collect-profiles-29411445-vw4nj\" (UID: \"961e4194-42c6-4bda-9ec5-6122c2e2ace6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.392845 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/961e4194-42c6-4bda-9ec5-6122c2e2ace6-config-volume\") pod \"collect-profiles-29411445-vw4nj\" (UID: \"961e4194-42c6-4bda-9ec5-6122c2e2ace6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.394016 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/961e4194-42c6-4bda-9ec5-6122c2e2ace6-config-volume\") pod \"collect-profiles-29411445-vw4nj\" (UID: \"961e4194-42c6-4bda-9ec5-6122c2e2ace6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.406005 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/961e4194-42c6-4bda-9ec5-6122c2e2ace6-secret-volume\") pod \"collect-profiles-29411445-vw4nj\" (UID: \"961e4194-42c6-4bda-9ec5-6122c2e2ace6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.421713 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zzvt\" (UniqueName: \"kubernetes.io/projected/961e4194-42c6-4bda-9ec5-6122c2e2ace6-kube-api-access-8zzvt\") pod \"collect-profiles-29411445-vw4nj\" (UID: \"961e4194-42c6-4bda-9ec5-6122c2e2ace6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj"
Dec 02 14:45:00 crc kubenswrapper[4625]: I1202 14:45:00.523605 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj"
Dec 02 14:45:01 crc kubenswrapper[4625]: I1202 14:45:01.101045 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj"]
Dec 02 14:45:01 crc kubenswrapper[4625]: I1202 14:45:01.701263 4625 generic.go:334] "Generic (PLEG): container finished" podID="961e4194-42c6-4bda-9ec5-6122c2e2ace6" containerID="7f7204a0902fbba7df164fa33b72db0fe1ead8d272298d4f261a8997fdb1506d" exitCode=0
Dec 02 14:45:01 crc kubenswrapper[4625]: I1202 14:45:01.701529 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj" event={"ID":"961e4194-42c6-4bda-9ec5-6122c2e2ace6","Type":"ContainerDied","Data":"7f7204a0902fbba7df164fa33b72db0fe1ead8d272298d4f261a8997fdb1506d"}
Dec 02 14:45:01 crc kubenswrapper[4625]: I1202 14:45:01.701559 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj" event={"ID":"961e4194-42c6-4bda-9ec5-6122c2e2ace6","Type":"ContainerStarted","Data":"34a2f86ccc133dc4997f0a6bfc5a574fb9f4c200089a6c87444df3d21e329ecb"}
Dec 02 14:45:03 crc kubenswrapper[4625]: I1202 14:45:03.345876 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj"
Dec 02 14:45:03 crc kubenswrapper[4625]: I1202 14:45:03.449986 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/961e4194-42c6-4bda-9ec5-6122c2e2ace6-secret-volume\") pod \"961e4194-42c6-4bda-9ec5-6122c2e2ace6\" (UID: \"961e4194-42c6-4bda-9ec5-6122c2e2ace6\") "
Dec 02 14:45:03 crc kubenswrapper[4625]: I1202 14:45:03.450444 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zzvt\" (UniqueName: \"kubernetes.io/projected/961e4194-42c6-4bda-9ec5-6122c2e2ace6-kube-api-access-8zzvt\") pod \"961e4194-42c6-4bda-9ec5-6122c2e2ace6\" (UID: \"961e4194-42c6-4bda-9ec5-6122c2e2ace6\") "
Dec 02 14:45:03 crc kubenswrapper[4625]: I1202 14:45:03.450535 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/961e4194-42c6-4bda-9ec5-6122c2e2ace6-config-volume\") pod \"961e4194-42c6-4bda-9ec5-6122c2e2ace6\" (UID: \"961e4194-42c6-4bda-9ec5-6122c2e2ace6\") "
Dec 02 14:45:03 crc kubenswrapper[4625]: I1202 14:45:03.451694 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/961e4194-42c6-4bda-9ec5-6122c2e2ace6-config-volume" (OuterVolumeSpecName: "config-volume") pod "961e4194-42c6-4bda-9ec5-6122c2e2ace6" (UID: "961e4194-42c6-4bda-9ec5-6122c2e2ace6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 14:45:03 crc kubenswrapper[4625]: I1202 14:45:03.461603 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/961e4194-42c6-4bda-9ec5-6122c2e2ace6-kube-api-access-8zzvt" (OuterVolumeSpecName: "kube-api-access-8zzvt") pod "961e4194-42c6-4bda-9ec5-6122c2e2ace6" (UID: "961e4194-42c6-4bda-9ec5-6122c2e2ace6"). InnerVolumeSpecName "kube-api-access-8zzvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:45:03 crc kubenswrapper[4625]: I1202 14:45:03.461872 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961e4194-42c6-4bda-9ec5-6122c2e2ace6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "961e4194-42c6-4bda-9ec5-6122c2e2ace6" (UID: "961e4194-42c6-4bda-9ec5-6122c2e2ace6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 14:45:03 crc kubenswrapper[4625]: I1202 14:45:03.553013 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zzvt\" (UniqueName: \"kubernetes.io/projected/961e4194-42c6-4bda-9ec5-6122c2e2ace6-kube-api-access-8zzvt\") on node \"crc\" DevicePath \"\""
Dec 02 14:45:03 crc kubenswrapper[4625]: I1202 14:45:03.553611 4625 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/961e4194-42c6-4bda-9ec5-6122c2e2ace6-config-volume\") on node \"crc\" DevicePath \"\""
Dec 02 14:45:03 crc kubenswrapper[4625]: I1202 14:45:03.553693 4625 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/961e4194-42c6-4bda-9ec5-6122c2e2ace6-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 02 14:45:03 crc kubenswrapper[4625]: I1202 14:45:03.730018 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj" event={"ID":"961e4194-42c6-4bda-9ec5-6122c2e2ace6","Type":"ContainerDied","Data":"34a2f86ccc133dc4997f0a6bfc5a574fb9f4c200089a6c87444df3d21e329ecb"}
Dec 02 14:45:03 crc kubenswrapper[4625]: I1202 14:45:03.730141 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411445-vw4nj"
Dec 02 14:45:03 crc kubenswrapper[4625]: I1202 14:45:03.730064 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34a2f86ccc133dc4997f0a6bfc5a574fb9f4c200089a6c87444df3d21e329ecb"
Dec 02 14:45:04 crc kubenswrapper[4625]: I1202 14:45:04.454621 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"]
Dec 02 14:45:04 crc kubenswrapper[4625]: I1202 14:45:04.470741 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411400-9j88t"]
Dec 02 14:45:04 crc kubenswrapper[4625]: I1202 14:45:04.873552 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c17bcfde-3d1b-407e-83f8-b9a9640c7108" path="/var/lib/kubelet/pods/c17bcfde-3d1b-407e-83f8-b9a9640c7108/volumes"
Dec 02 14:45:37 crc kubenswrapper[4625]: I1202 14:45:37.863520 4625 scope.go:117] "RemoveContainer" containerID="0a043ad81bd0450cf2f39c00cb75de4dda57ab38f3fb62ffaa4951a2760e67e7"
Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.101462 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rd9zj"]
Dec 02 14:46:14 crc kubenswrapper[4625]: E1202 14:46:14.105400 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="961e4194-42c6-4bda-9ec5-6122c2e2ace6" containerName="collect-profiles"
Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.105415 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="961e4194-42c6-4bda-9ec5-6122c2e2ace6" containerName="collect-profiles"
Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.105651 4625
memory_manager.go:354] "RemoveStaleState removing state" podUID="961e4194-42c6-4bda-9ec5-6122c2e2ace6" containerName="collect-profiles" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.107365 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.127534 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rd9zj"] Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.198898 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-utilities\") pod \"redhat-marketplace-rd9zj\" (UID: \"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a\") " pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.199629 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-catalog-content\") pod \"redhat-marketplace-rd9zj\" (UID: \"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a\") " pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.199769 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26jb7\" (UniqueName: \"kubernetes.io/projected/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-kube-api-access-26jb7\") pod \"redhat-marketplace-rd9zj\" (UID: \"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a\") " pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.295094 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qsd27"] Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.297854 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.302983 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-catalog-content\") pod \"redhat-marketplace-rd9zj\" (UID: \"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a\") " pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.303062 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26jb7\" (UniqueName: \"kubernetes.io/projected/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-kube-api-access-26jb7\") pod \"redhat-marketplace-rd9zj\" (UID: \"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a\") " pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.303221 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-utilities\") pod \"redhat-marketplace-rd9zj\" (UID: \"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a\") " pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.304068 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-utilities\") pod \"redhat-marketplace-rd9zj\" (UID: \"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a\") " pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.305803 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-catalog-content\") pod \"redhat-marketplace-rd9zj\" (UID: \"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a\") " pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.316334 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qsd27"] Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.332933 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26jb7\" (UniqueName: \"kubernetes.io/projected/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-kube-api-access-26jb7\") pod \"redhat-marketplace-rd9zj\" (UID: \"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a\") " pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.405824 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-catalog-content\") pod \"redhat-operators-qsd27\" (UID: \"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf\") " pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.406253 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xjnt\" (UniqueName: \"kubernetes.io/projected/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-kube-api-access-9xjnt\") pod \"redhat-operators-qsd27\" (UID: \"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf\") " pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.406427 4625 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-utilities\") pod \"redhat-operators-qsd27\" (UID: \"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf\") " pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.436010 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.509273 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-catalog-content\") pod \"redhat-operators-qsd27\" (UID: \"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf\") " pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.509350 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xjnt\" (UniqueName: \"kubernetes.io/projected/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-kube-api-access-9xjnt\") pod \"redhat-operators-qsd27\" (UID: \"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf\") " pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.509403 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-utilities\") pod \"redhat-operators-qsd27\" (UID: \"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf\") " pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.510099 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-utilities\") pod \"redhat-operators-qsd27\" (UID: \"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf\") " pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.510196 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-catalog-content\") pod \"redhat-operators-qsd27\" (UID: \"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf\") " pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.533220 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xjnt\" (UniqueName: \"kubernetes.io/projected/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-kube-api-access-9xjnt\") pod \"redhat-operators-qsd27\" (UID: \"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf\") " pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:14 crc kubenswrapper[4625]: I1202 14:46:14.628120 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:15 crc kubenswrapper[4625]: I1202 14:46:15.148031 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rd9zj"] Dec 02 14:46:15 crc kubenswrapper[4625]: I1202 14:46:15.275355 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qsd27"] Dec 02 14:46:15 crc kubenswrapper[4625]: I1202 14:46:15.638846 4625 generic.go:334] "Generic (PLEG): container finished" podID="b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf" containerID="a989ac78a33c5779c7ed5a1e912654342db70ef0158acea16621ef860b6153f7" exitCode=0 Dec 02 14:46:15 crc kubenswrapper[4625]: I1202 14:46:15.639451 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsd27" event={"ID":"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf","Type":"ContainerDied","Data":"a989ac78a33c5779c7ed5a1e912654342db70ef0158acea16621ef860b6153f7"} Dec 02 14:46:15 crc kubenswrapper[4625]: I1202 14:46:15.640839 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsd27" event={"ID":"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf","Type":"ContainerStarted","Data":"062a08c7115ea1d7c47afcda04342acc3e1d9999daad3cdf1867525151223e98"} Dec 02 14:46:15 crc kubenswrapper[4625]: I1202 14:46:15.643289 4625 generic.go:334] "Generic (PLEG): container finished" podID="74831e0d-2fc4-4a3d-a7bd-8e646cdb091a" containerID="02aa29691eb77946623037b52feef55890fe024333fa1385476f075bd95ba624" exitCode=0 Dec 02 14:46:15 crc kubenswrapper[4625]: I1202 14:46:15.643358 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rd9zj" event={"ID":"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a","Type":"ContainerDied","Data":"02aa29691eb77946623037b52feef55890fe024333fa1385476f075bd95ba624"} Dec 02 14:46:15 crc kubenswrapper[4625]: I1202 14:46:15.643395 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rd9zj" event={"ID":"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a","Type":"ContainerStarted","Data":"8a309b51c8d75c2858c2a8d58b603d174a7d87b2b1edbecf2ec89daeebc893d0"} Dec 02 14:46:16 crc kubenswrapper[4625]: I1202 14:46:16.655542 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rd9zj" event={"ID":"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a","Type":"ContainerStarted","Data":"ca67bc581a9fb7310388b56b96932be92cd12c477595afca91c3432d528e06cc"} Dec 02 14:46:17 crc kubenswrapper[4625]: I1202 14:46:17.671414 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsd27" event={"ID":"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf","Type":"ContainerStarted","Data":"48ff07217c3a405c916ce91b5f5baf072b43a76f5b37defc5764f162c3330606"} Dec 02 14:46:17 crc kubenswrapper[4625]: I1202 14:46:17.677251 4625 generic.go:334] "Generic (PLEG): container finished" podID="74831e0d-2fc4-4a3d-a7bd-8e646cdb091a" containerID="ca67bc581a9fb7310388b56b96932be92cd12c477595afca91c3432d528e06cc" exitCode=0 Dec 02 14:46:17 crc kubenswrapper[4625]: I1202 14:46:17.677446 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rd9zj" event={"ID":"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a","Type":"ContainerDied","Data":"ca67bc581a9fb7310388b56b96932be92cd12c477595afca91c3432d528e06cc"} Dec 02 14:46:19 crc kubenswrapper[4625]: I1202 14:46:19.272366 4625 patch_prober.go:28] interesting 
pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:46:19 crc kubenswrapper[4625]: I1202 14:46:19.272933 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:46:20 crc kubenswrapper[4625]: I1202 14:46:20.718942 4625 generic.go:334] "Generic (PLEG): container finished" podID="b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf" containerID="48ff07217c3a405c916ce91b5f5baf072b43a76f5b37defc5764f162c3330606" exitCode=0 Dec 02 14:46:20 crc kubenswrapper[4625]: I1202 14:46:20.718974 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsd27" event={"ID":"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf","Type":"ContainerDied","Data":"48ff07217c3a405c916ce91b5f5baf072b43a76f5b37defc5764f162c3330606"} Dec 02 14:46:20 crc kubenswrapper[4625]: I1202 14:46:20.726449 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rd9zj" event={"ID":"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a","Type":"ContainerStarted","Data":"851d95b9cfd50036250e84ba7d22cc26520ca8f07d405e5711254f5b8317df87"} Dec 02 14:46:21 crc kubenswrapper[4625]: I1202 14:46:21.983686 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsd27" event={"ID":"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf","Type":"ContainerStarted","Data":"9d38d729ae4cf457d81a3f9c2dac3ed0c1bc25e1820c22fe74ac4e90f240fd1d"} Dec 02 14:46:23 crc kubenswrapper[4625]: I1202 14:46:23.179883 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qsd27" podStartSLOduration=3.549704236 podStartE2EDuration="9.179853607s" podCreationTimestamp="2025-12-02 14:46:14 +0000 UTC" firstStartedPulling="2025-12-02 14:46:15.64147766 +0000 UTC m=+3731.603654735" lastFinishedPulling="2025-12-02 14:46:21.271627031 +0000 UTC m=+3737.233804106" observedRunningTime="2025-12-02 14:46:23.135802964 +0000 UTC m=+3739.097980049" watchObservedRunningTime="2025-12-02 14:46:23.179853607 +0000 UTC m=+3739.142030682" Dec 02 14:46:23 crc kubenswrapper[4625]: I1202 14:46:23.539552 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rd9zj" podStartSLOduration=5.328338931 podStartE2EDuration="9.539523892s" podCreationTimestamp="2025-12-02 14:46:14 +0000 UTC" firstStartedPulling="2025-12-02 14:46:15.645686443 +0000 UTC m=+3731.607863518" lastFinishedPulling="2025-12-02 14:46:19.856871404 +0000 UTC m=+3735.819048479" observedRunningTime="2025-12-02 14:46:20.788073927 +0000 UTC m=+3736.750251022" watchObservedRunningTime="2025-12-02 14:46:23.539523892 +0000 UTC m=+3739.501700967" Dec 02 14:46:24 crc kubenswrapper[4625]: I1202 14:46:24.436225 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:24 crc kubenswrapper[4625]: I1202 14:46:24.436560 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:24 crc kubenswrapper[4625]: I1202 
14:46:24.629256 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:24 crc kubenswrapper[4625]: I1202 14:46:24.629480 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:25 crc kubenswrapper[4625]: I1202 14:46:25.494175 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rd9zj" podUID="74831e0d-2fc4-4a3d-a7bd-8e646cdb091a" containerName="registry-server" probeResult="failure" output=< Dec 02 14:46:25 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s Dec 02 14:46:25 crc kubenswrapper[4625]: > Dec 02 14:46:25 crc kubenswrapper[4625]: I1202 14:46:25.683564 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qsd27" podUID="b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf" containerName="registry-server" probeResult="failure" output=< Dec 02 14:46:25 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s Dec 02 14:46:25 crc kubenswrapper[4625]: > Dec 02 14:46:34 crc kubenswrapper[4625]: I1202 14:46:34.517219 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:34 crc kubenswrapper[4625]: I1202 14:46:34.600105 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:34 crc kubenswrapper[4625]: I1202 14:46:34.685619 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:34 crc kubenswrapper[4625]: I1202 14:46:34.763052 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:34 crc kubenswrapper[4625]: I1202 14:46:34.780394 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rd9zj"] Dec 02 14:46:36 crc kubenswrapper[4625]: I1202 14:46:36.488814 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rd9zj" podUID="74831e0d-2fc4-4a3d-a7bd-8e646cdb091a" containerName="registry-server" containerID="cri-o://851d95b9cfd50036250e84ba7d22cc26520ca8f07d405e5711254f5b8317df87" gracePeriod=2 Dec 02 14:46:36 crc kubenswrapper[4625]: I1202 14:46:36.978639 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qsd27"] Dec 02 14:46:36 crc kubenswrapper[4625]: I1202 14:46:36.979666 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qsd27" podUID="b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf" containerName="registry-server" containerID="cri-o://9d38d729ae4cf457d81a3f9c2dac3ed0c1bc25e1820c22fe74ac4e90f240fd1d" gracePeriod=2 Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.206231 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.275838 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26jb7\" (UniqueName: \"kubernetes.io/projected/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-kube-api-access-26jb7\") pod \"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a\" (UID: \"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a\") " Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.276038 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-utilities\") pod \"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a\" (UID: \"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a\") " Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.276145 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-catalog-content\") pod \"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a\" (UID: \"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a\") " Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.277579 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-utilities" (OuterVolumeSpecName: "utilities") pod "74831e0d-2fc4-4a3d-a7bd-8e646cdb091a" (UID: "74831e0d-2fc4-4a3d-a7bd-8e646cdb091a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.290121 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-kube-api-access-26jb7" (OuterVolumeSpecName: "kube-api-access-26jb7") pod "74831e0d-2fc4-4a3d-a7bd-8e646cdb091a" (UID: "74831e0d-2fc4-4a3d-a7bd-8e646cdb091a"). InnerVolumeSpecName "kube-api-access-26jb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.298238 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74831e0d-2fc4-4a3d-a7bd-8e646cdb091a" (UID: "74831e0d-2fc4-4a3d-a7bd-8e646cdb091a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.377993 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26jb7\" (UniqueName: \"kubernetes.io/projected/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-kube-api-access-26jb7\") on node \"crc\" DevicePath \"\"" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.378453 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.378467 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.597885 4625 generic.go:334] "Generic (PLEG): container finished" podID="b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf" containerID="9d38d729ae4cf457d81a3f9c2dac3ed0c1bc25e1820c22fe74ac4e90f240fd1d" exitCode=0 Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.597939 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsd27" event={"ID":"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf","Type":"ContainerDied","Data":"9d38d729ae4cf457d81a3f9c2dac3ed0c1bc25e1820c22fe74ac4e90f240fd1d"} Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.607227 4625 generic.go:334] "Generic (PLEG): container finished" podID="74831e0d-2fc4-4a3d-a7bd-8e646cdb091a" containerID="851d95b9cfd50036250e84ba7d22cc26520ca8f07d405e5711254f5b8317df87" exitCode=0 Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.607275 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rd9zj" event={"ID":"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a","Type":"ContainerDied","Data":"851d95b9cfd50036250e84ba7d22cc26520ca8f07d405e5711254f5b8317df87"} Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.607321 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rd9zj" event={"ID":"74831e0d-2fc4-4a3d-a7bd-8e646cdb091a","Type":"ContainerDied","Data":"8a309b51c8d75c2858c2a8d58b603d174a7d87b2b1edbecf2ec89daeebc893d0"} Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.607367 4625 scope.go:117] "RemoveContainer" containerID="851d95b9cfd50036250e84ba7d22cc26520ca8f07d405e5711254f5b8317df87" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.607581 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rd9zj" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.736973 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.760779 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rd9zj"] Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.776885 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-catalog-content\") pod \"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf\" (UID: \"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf\") " Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.777102 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xjnt\" (UniqueName: \"kubernetes.io/projected/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-kube-api-access-9xjnt\") pod \"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf\" (UID: \"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf\") " Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.777270 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-utilities\") pod \"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf\" (UID: \"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf\") " Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.777840 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rd9zj"] Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.779371 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-utilities" (OuterVolumeSpecName: "utilities") pod "b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf" (UID: "b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.787358 4625 scope.go:117] "RemoveContainer" containerID="ca67bc581a9fb7310388b56b96932be92cd12c477595afca91c3432d528e06cc" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.790290 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-kube-api-access-9xjnt" (OuterVolumeSpecName: "kube-api-access-9xjnt") pod "b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf" (UID: "b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf"). InnerVolumeSpecName "kube-api-access-9xjnt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.882591 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xjnt\" (UniqueName: \"kubernetes.io/projected/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-kube-api-access-9xjnt\") on node \"crc\" DevicePath \"\"" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.883061 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.884039 4625 scope.go:117] "RemoveContainer" containerID="02aa29691eb77946623037b52feef55890fe024333fa1385476f075bd95ba624" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.945038 4625 scope.go:117] "RemoveContainer" containerID="851d95b9cfd50036250e84ba7d22cc26520ca8f07d405e5711254f5b8317df87" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.948365 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf" (UID: "b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.983972 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 14:46:37 crc kubenswrapper[4625]: E1202 14:46:37.984191 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"851d95b9cfd50036250e84ba7d22cc26520ca8f07d405e5711254f5b8317df87\": container with ID starting with 851d95b9cfd50036250e84ba7d22cc26520ca8f07d405e5711254f5b8317df87 not found: ID does not exist" containerID="851d95b9cfd50036250e84ba7d22cc26520ca8f07d405e5711254f5b8317df87" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.984247 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"851d95b9cfd50036250e84ba7d22cc26520ca8f07d405e5711254f5b8317df87"} err="failed to get container status \"851d95b9cfd50036250e84ba7d22cc26520ca8f07d405e5711254f5b8317df87\": rpc error: code = NotFound desc = could not find container \"851d95b9cfd50036250e84ba7d22cc26520ca8f07d405e5711254f5b8317df87\": container with ID starting with 851d95b9cfd50036250e84ba7d22cc26520ca8f07d405e5711254f5b8317df87 not found: ID does not exist" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.984285 4625 scope.go:117] "RemoveContainer" containerID="ca67bc581a9fb7310388b56b96932be92cd12c477595afca91c3432d528e06cc" Dec 02 14:46:37 crc kubenswrapper[4625]: E1202 14:46:37.985798 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca67bc581a9fb7310388b56b96932be92cd12c477595afca91c3432d528e06cc\": container with ID starting with ca67bc581a9fb7310388b56b96932be92cd12c477595afca91c3432d528e06cc not found: ID does not exist" containerID="ca67bc581a9fb7310388b56b96932be92cd12c477595afca91c3432d528e06cc" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.985830 4625 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ca67bc581a9fb7310388b56b96932be92cd12c477595afca91c3432d528e06cc"} err="failed to get container status \"ca67bc581a9fb7310388b56b96932be92cd12c477595afca91c3432d528e06cc\": rpc error: code = NotFound desc = could not find container \"ca67bc581a9fb7310388b56b96932be92cd12c477595afca91c3432d528e06cc\": container with ID starting with ca67bc581a9fb7310388b56b96932be92cd12c477595afca91c3432d528e06cc not found: ID does not exist" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.985848 4625 scope.go:117] "RemoveContainer" containerID="02aa29691eb77946623037b52feef55890fe024333fa1385476f075bd95ba624" Dec 02 14:46:37 crc kubenswrapper[4625]: E1202 14:46:37.986169 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02aa29691eb77946623037b52feef55890fe024333fa1385476f075bd95ba624\": container with ID starting with 02aa29691eb77946623037b52feef55890fe024333fa1385476f075bd95ba624 not found: ID does not exist" containerID="02aa29691eb77946623037b52feef55890fe024333fa1385476f075bd95ba624" Dec 02 14:46:37 crc kubenswrapper[4625]: I1202 14:46:37.986198 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02aa29691eb77946623037b52feef55890fe024333fa1385476f075bd95ba624"} err="failed to get container status \"02aa29691eb77946623037b52feef55890fe024333fa1385476f075bd95ba624\": rpc error: code = NotFound desc = could not find container \"02aa29691eb77946623037b52feef55890fe024333fa1385476f075bd95ba624\": container with ID starting with 02aa29691eb77946623037b52feef55890fe024333fa1385476f075bd95ba624 not found: ID does not exist" Dec 02 14:46:38 crc kubenswrapper[4625]: I1202 14:46:38.642158 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qsd27" event={"ID":"b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf","Type":"ContainerDied","Data":"062a08c7115ea1d7c47afcda04342acc3e1d9999daad3cdf1867525151223e98"} Dec 02 14:46:38 crc kubenswrapper[4625]: I1202 14:46:38.642269 4625 scope.go:117] "RemoveContainer" containerID="9d38d729ae4cf457d81a3f9c2dac3ed0c1bc25e1820c22fe74ac4e90f240fd1d" Dec 02 14:46:38 crc kubenswrapper[4625]: I1202 14:46:38.642572 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qsd27" Dec 02 14:46:38 crc kubenswrapper[4625]: I1202 14:46:38.696318 4625 scope.go:117] "RemoveContainer" containerID="48ff07217c3a405c916ce91b5f5baf072b43a76f5b37defc5764f162c3330606" Dec 02 14:46:38 crc kubenswrapper[4625]: I1202 14:46:38.711372 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qsd27"] Dec 02 14:46:38 crc kubenswrapper[4625]: I1202 14:46:38.723814 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qsd27"] Dec 02 14:46:38 crc kubenswrapper[4625]: I1202 14:46:38.734382 4625 scope.go:117] "RemoveContainer" containerID="a989ac78a33c5779c7ed5a1e912654342db70ef0158acea16621ef860b6153f7" Dec 02 14:46:38 crc kubenswrapper[4625]: I1202 14:46:38.871121 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74831e0d-2fc4-4a3d-a7bd-8e646cdb091a" path="/var/lib/kubelet/pods/74831e0d-2fc4-4a3d-a7bd-8e646cdb091a/volumes" Dec 02 14:46:38 crc kubenswrapper[4625]: I1202 14:46:38.872532 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf" path="/var/lib/kubelet/pods/b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf/volumes" Dec 02 14:46:49 crc kubenswrapper[4625]: I1202 14:46:49.272031 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:46:49 crc kubenswrapper[4625]: I1202 14:46:49.272747 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:47:19 crc kubenswrapper[4625]: I1202 14:47:19.271954 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 14:47:19 crc kubenswrapper[4625]: I1202 14:47:19.272463 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 14:47:19 crc kubenswrapper[4625]: I1202 14:47:19.272564 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 14:47:19 crc kubenswrapper[4625]: I1202 14:47:19.273782 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 14:47:19 crc kubenswrapper[4625]: I1202 14:47:19.273845 4625 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d" gracePeriod=600 Dec 02 14:47:19 crc kubenswrapper[4625]: E1202 14:47:19.409169 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:47:20 crc kubenswrapper[4625]: I1202 14:47:20.298001 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d" exitCode=0 Dec 02 14:47:20 crc kubenswrapper[4625]: I1202 14:47:20.298333 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"} Dec 02 14:47:20 crc kubenswrapper[4625]: I1202 14:47:20.298384 4625 scope.go:117] "RemoveContainer" containerID="cf7a702099ca825d0bc85749e0559971a5464c9093309f3754d907b473e2f654" Dec 02 14:47:20 crc kubenswrapper[4625]: I1202 14:47:20.299339 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d" Dec 02 14:47:20 crc kubenswrapper[4625]: E1202 14:47:20.299611 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:47:31 crc kubenswrapper[4625]: I1202 14:47:31.857090 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d" Dec 02 14:47:31 crc kubenswrapper[4625]: E1202 14:47:31.857927 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:47:43 crc kubenswrapper[4625]: I1202 14:47:43.857594 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d" Dec 02 14:47:43 crc kubenswrapper[4625]: E1202 14:47:43.858390 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:47:57 crc 
kubenswrapper[4625]: I1202 14:47:57.858539 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d" Dec 02 14:47:57 crc kubenswrapper[4625]: E1202 14:47:57.859444 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:48:09 crc kubenswrapper[4625]: I1202 14:48:09.857654 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d" Dec 02 14:48:09 crc kubenswrapper[4625]: E1202 14:48:09.858363 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:48:24 crc kubenswrapper[4625]: I1202 14:48:24.879899 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d" Dec 02 14:48:24 crc kubenswrapper[4625]: E1202 14:48:24.881391 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:48:39 crc kubenswrapper[4625]: I1202 14:48:39.858161 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d" Dec 02 14:48:39 crc kubenswrapper[4625]: E1202 14:48:39.859876 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:48:51 crc kubenswrapper[4625]: I1202 14:48:51.857361 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d" Dec 02 14:48:51 crc kubenswrapper[4625]: E1202 14:48:51.858222 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 14:49:05 crc kubenswrapper[4625]: I1202 14:49:05.856841 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d" Dec 02 14:49:05 crc 
kubenswrapper[4625]: E1202 14:49:05.857664 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:49:20 crc kubenswrapper[4625]: I1202 14:49:20.857108 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"
Dec 02 14:49:20 crc kubenswrapper[4625]: E1202 14:49:20.857835 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:49:31 crc kubenswrapper[4625]: I1202 14:49:31.876322 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"
Dec 02 14:49:31 crc kubenswrapper[4625]: E1202 14:49:31.878285 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:49:46 crc kubenswrapper[4625]: I1202 14:49:46.856520 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"
Dec 02 14:49:46 crc kubenswrapper[4625]: E1202 14:49:46.857457 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:50:00 crc kubenswrapper[4625]: I1202 14:50:00.856904 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"
Dec 02 14:50:00 crc kubenswrapper[4625]: E1202 14:50:00.858235 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:50:12 crc kubenswrapper[4625]: I1202 14:50:12.857112 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"
Dec 02 14:50:12 crc kubenswrapper[4625]: E1202 14:50:12.859487 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:50:24 crc kubenswrapper[4625]: I1202 14:50:24.863562 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"
Dec 02 14:50:24 crc kubenswrapper[4625]: E1202 14:50:24.866966 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:50:36 crc kubenswrapper[4625]: I1202 14:50:36.856547 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"
Dec 02 14:50:36 crc kubenswrapper[4625]: E1202 14:50:36.857550 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:50:51 crc kubenswrapper[4625]: I1202 14:50:51.858013 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"
Dec 02 14:50:51 crc kubenswrapper[4625]: E1202 14:50:51.858672 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:51:03 crc kubenswrapper[4625]: I1202 14:51:03.856465 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"
Dec 02 14:51:03 crc kubenswrapper[4625]: E1202 14:51:03.857572 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:51:18 crc kubenswrapper[4625]: I1202 14:51:18.857589 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"
Dec 02 14:51:18 crc kubenswrapper[4625]: E1202 14:51:18.858674 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:51:33 crc kubenswrapper[4625]: I1202 14:51:33.856994 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"
Dec 02 14:51:33 crc kubenswrapper[4625]: E1202 14:51:33.857901 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:51:47 crc kubenswrapper[4625]: I1202 14:51:47.856681 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"
Dec 02 14:51:47 crc kubenswrapper[4625]: E1202 14:51:47.857830 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:51:59 crc kubenswrapper[4625]: I1202 14:51:59.856941 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"
Dec 02 14:51:59 crc kubenswrapper[4625]: E1202 14:51:59.858067 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:52:14 crc kubenswrapper[4625]: I1202 14:52:14.870623 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"
Dec 02 14:52:14 crc kubenswrapper[4625]: E1202 14:52:14.872115 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:52:29 crc kubenswrapper[4625]: I1202 14:52:29.856365 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"
Dec 02 14:52:31 crc kubenswrapper[4625]: I1202 14:52:31.274717 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"a8a1d35267ba95ba31f7250bf07dbd47dd1a309aae99409cf289867a32ae2221"}
Dec 02 14:53:34 crc kubenswrapper[4625]: I1202 14:53:34.935824 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ddh2k"]
Dec 02 14:53:34 crc kubenswrapper[4625]: E1202 14:53:34.937079 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74831e0d-2fc4-4a3d-a7bd-8e646cdb091a" containerName="registry-server"
Dec 02 14:53:34 crc kubenswrapper[4625]: I1202 14:53:34.937119 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="74831e0d-2fc4-4a3d-a7bd-8e646cdb091a" containerName="registry-server"
Dec 02 14:53:34 crc kubenswrapper[4625]: E1202 14:53:34.937142 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf" containerName="extract-content"
Dec 02 14:53:34 crc kubenswrapper[4625]: I1202 14:53:34.937152 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf" containerName="extract-content"
Dec 02 14:53:34 crc kubenswrapper[4625]: E1202 14:53:34.937180 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf" containerName="registry-server"
Dec 02 14:53:34 crc kubenswrapper[4625]: I1202 14:53:34.937187 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf" containerName="registry-server"
Dec 02 14:53:34 crc kubenswrapper[4625]: E1202 14:53:34.937197 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf" containerName="extract-utilities"
Dec 02 14:53:34 crc kubenswrapper[4625]: I1202 14:53:34.937205 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf" containerName="extract-utilities"
Dec 02 14:53:34 crc kubenswrapper[4625]: E1202 14:53:34.937223 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74831e0d-2fc4-4a3d-a7bd-8e646cdb091a" containerName="extract-content"
Dec 02 14:53:34 crc kubenswrapper[4625]: I1202 14:53:34.937232 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="74831e0d-2fc4-4a3d-a7bd-8e646cdb091a" containerName="extract-content"
Dec 02 14:53:34 crc kubenswrapper[4625]: E1202 14:53:34.937265 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74831e0d-2fc4-4a3d-a7bd-8e646cdb091a" containerName="extract-utilities"
Dec 02 14:53:34 crc kubenswrapper[4625]: I1202 14:53:34.937271 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="74831e0d-2fc4-4a3d-a7bd-8e646cdb091a" containerName="extract-utilities"
Dec 02 14:53:34 crc kubenswrapper[4625]: I1202 14:53:34.937580 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="74831e0d-2fc4-4a3d-a7bd-8e646cdb091a" containerName="registry-server"
Dec 02 14:53:34 crc kubenswrapper[4625]: I1202 14:53:34.937599 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6a8bafb-e5b1-42a9-8c05-b413ce8f4acf" containerName="registry-server"
Dec 02 14:53:34 crc kubenswrapper[4625]: I1202 14:53:34.939450 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:35 crc kubenswrapper[4625]: I1202 14:53:35.015753 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ddh2k"]
Dec 02 14:53:35 crc kubenswrapper[4625]: I1202 14:53:35.050503 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d85589-8e77-4872-a37f-4318e132dafe-catalog-content\") pod \"community-operators-ddh2k\" (UID: \"81d85589-8e77-4872-a37f-4318e132dafe\") " pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:35 crc kubenswrapper[4625]: I1202 14:53:35.050598 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d85589-8e77-4872-a37f-4318e132dafe-utilities\") pod \"community-operators-ddh2k\" (UID: \"81d85589-8e77-4872-a37f-4318e132dafe\") " pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:35 crc kubenswrapper[4625]: I1202 14:53:35.050643 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5jjk\" (UniqueName: \"kubernetes.io/projected/81d85589-8e77-4872-a37f-4318e132dafe-kube-api-access-m5jjk\") pod \"community-operators-ddh2k\" (UID: \"81d85589-8e77-4872-a37f-4318e132dafe\") " pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:35 crc kubenswrapper[4625]: I1202 14:53:35.153105 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d85589-8e77-4872-a37f-4318e132dafe-catalog-content\") pod \"community-operators-ddh2k\" (UID: \"81d85589-8e77-4872-a37f-4318e132dafe\") " pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:35 crc kubenswrapper[4625]: I1202 14:53:35.153475 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d85589-8e77-4872-a37f-4318e132dafe-utilities\") pod \"community-operators-ddh2k\" (UID: \"81d85589-8e77-4872-a37f-4318e132dafe\") " pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:35 crc kubenswrapper[4625]: I1202 14:53:35.153591 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5jjk\" (UniqueName: \"kubernetes.io/projected/81d85589-8e77-4872-a37f-4318e132dafe-kube-api-access-m5jjk\") pod \"community-operators-ddh2k\" (UID: \"81d85589-8e77-4872-a37f-4318e132dafe\") " pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:35 crc kubenswrapper[4625]: I1202 14:53:35.154555 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d85589-8e77-4872-a37f-4318e132dafe-catalog-content\") pod \"community-operators-ddh2k\" (UID: \"81d85589-8e77-4872-a37f-4318e132dafe\") " pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:35 crc kubenswrapper[4625]: I1202 14:53:35.154895 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d85589-8e77-4872-a37f-4318e132dafe-utilities\") pod \"community-operators-ddh2k\" (UID: \"81d85589-8e77-4872-a37f-4318e132dafe\") " pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:35 crc kubenswrapper[4625]: I1202 14:53:35.172680 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5jjk\" (UniqueName: \"kubernetes.io/projected/81d85589-8e77-4872-a37f-4318e132dafe-kube-api-access-m5jjk\") pod \"community-operators-ddh2k\" (UID: \"81d85589-8e77-4872-a37f-4318e132dafe\") " pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:35 crc kubenswrapper[4625]: I1202 14:53:35.272088 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:35 crc kubenswrapper[4625]: I1202 14:53:35.883169 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ddh2k"]
Dec 02 14:53:36 crc kubenswrapper[4625]: I1202 14:53:36.011713 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddh2k" event={"ID":"81d85589-8e77-4872-a37f-4318e132dafe","Type":"ContainerStarted","Data":"b000c16e4fd2b808d330407dccc89f48c74b05dd6037593cabba0c3bd74a1927"}
Dec 02 14:53:36 crc kubenswrapper[4625]: E1202 14:53:36.383685 4625 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81d85589_8e77_4872_a37f_4318e132dafe.slice/crio-conmon-178e31c9b6605576e4195dd6b686c6ddc192768d3ccdad73a6e9573a23e39d25.scope\": RecentStats: unable to find data in memory cache]"
Dec 02 14:53:37 crc kubenswrapper[4625]: I1202 14:53:37.024361 4625 generic.go:334] "Generic (PLEG): container finished" podID="81d85589-8e77-4872-a37f-4318e132dafe" containerID="178e31c9b6605576e4195dd6b686c6ddc192768d3ccdad73a6e9573a23e39d25" exitCode=0
Dec 02 14:53:37 crc kubenswrapper[4625]: I1202 14:53:37.024486 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddh2k" event={"ID":"81d85589-8e77-4872-a37f-4318e132dafe","Type":"ContainerDied","Data":"178e31c9b6605576e4195dd6b686c6ddc192768d3ccdad73a6e9573a23e39d25"}
Dec 02 14:53:37 crc kubenswrapper[4625]: I1202 14:53:37.027375 4625 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 02 14:53:39 crc kubenswrapper[4625]: I1202 14:53:39.047775 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddh2k" event={"ID":"81d85589-8e77-4872-a37f-4318e132dafe","Type":"ContainerStarted","Data":"c283c96b6e9045bf9a3d76c4c057b94b0ecef79566ff35cfdd275beed41b0505"}
Dec 02 14:53:40 crc kubenswrapper[4625]: I1202 14:53:40.068366 4625 generic.go:334] "Generic (PLEG): container finished" podID="81d85589-8e77-4872-a37f-4318e132dafe" containerID="c283c96b6e9045bf9a3d76c4c057b94b0ecef79566ff35cfdd275beed41b0505" exitCode=0
Dec 02 14:53:40 crc kubenswrapper[4625]: I1202 14:53:40.068438 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddh2k" event={"ID":"81d85589-8e77-4872-a37f-4318e132dafe","Type":"ContainerDied","Data":"c283c96b6e9045bf9a3d76c4c057b94b0ecef79566ff35cfdd275beed41b0505"}
Dec 02 14:53:41 crc kubenswrapper[4625]: I1202 14:53:41.081495 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddh2k" event={"ID":"81d85589-8e77-4872-a37f-4318e132dafe","Type":"ContainerStarted","Data":"7460f25cad6151662007bb99e9e0a57678df64e11f0c9b2c44813da4501f712a"}
Dec 02 14:53:41 crc kubenswrapper[4625]: I1202 14:53:41.115115 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ddh2k" podStartSLOduration=3.482579151 podStartE2EDuration="7.115013053s" podCreationTimestamp="2025-12-02 14:53:34 +0000 UTC" firstStartedPulling="2025-12-02 14:53:37.026913386 +0000 UTC m=+4172.989090481" lastFinishedPulling="2025-12-02 14:53:40.659347308 +0000 UTC m=+4176.621524383" observedRunningTime="2025-12-02 14:53:41.107962043 +0000 UTC m=+4177.070139118" watchObservedRunningTime="2025-12-02 14:53:41.115013053 +0000 UTC m=+4177.077190128"
Dec 02 14:53:45 crc kubenswrapper[4625]: I1202 14:53:45.273543 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:45 crc kubenswrapper[4625]: I1202 14:53:45.274059 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:45 crc kubenswrapper[4625]: I1202 14:53:45.333930 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:46 crc kubenswrapper[4625]: I1202 14:53:46.258413 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:46 crc kubenswrapper[4625]: I1202 14:53:46.330592 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ddh2k"]
Dec 02 14:53:48 crc kubenswrapper[4625]: I1202 14:53:48.220823 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ddh2k" podUID="81d85589-8e77-4872-a37f-4318e132dafe" containerName="registry-server" containerID="cri-o://7460f25cad6151662007bb99e9e0a57678df64e11f0c9b2c44813da4501f712a" gracePeriod=2
Dec 02 14:53:48 crc kubenswrapper[4625]: I1202 14:53:48.747176 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:48 crc kubenswrapper[4625]: I1202 14:53:48.928191 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d85589-8e77-4872-a37f-4318e132dafe-utilities\") pod \"81d85589-8e77-4872-a37f-4318e132dafe\" (UID: \"81d85589-8e77-4872-a37f-4318e132dafe\") "
Dec 02 14:53:48 crc kubenswrapper[4625]: I1202 14:53:48.928619 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5jjk\" (UniqueName: \"kubernetes.io/projected/81d85589-8e77-4872-a37f-4318e132dafe-kube-api-access-m5jjk\") pod \"81d85589-8e77-4872-a37f-4318e132dafe\" (UID: \"81d85589-8e77-4872-a37f-4318e132dafe\") "
Dec 02 14:53:48 crc kubenswrapper[4625]: I1202 14:53:48.928681 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d85589-8e77-4872-a37f-4318e132dafe-catalog-content\") pod \"81d85589-8e77-4872-a37f-4318e132dafe\" (UID: \"81d85589-8e77-4872-a37f-4318e132dafe\") "
Dec 02 14:53:48 crc kubenswrapper[4625]: I1202 14:53:48.929233 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81d85589-8e77-4872-a37f-4318e132dafe-utilities" (OuterVolumeSpecName: "utilities") pod "81d85589-8e77-4872-a37f-4318e132dafe" (UID: "81d85589-8e77-4872-a37f-4318e132dafe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:53:48 crc kubenswrapper[4625]: I1202 14:53:48.929476 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d85589-8e77-4872-a37f-4318e132dafe-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 14:53:48 crc kubenswrapper[4625]: I1202 14:53:48.941598 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d85589-8e77-4872-a37f-4318e132dafe-kube-api-access-m5jjk" (OuterVolumeSpecName: "kube-api-access-m5jjk") pod "81d85589-8e77-4872-a37f-4318e132dafe" (UID: "81d85589-8e77-4872-a37f-4318e132dafe"). InnerVolumeSpecName "kube-api-access-m5jjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:53:48 crc kubenswrapper[4625]: I1202 14:53:48.984660 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81d85589-8e77-4872-a37f-4318e132dafe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81d85589-8e77-4872-a37f-4318e132dafe" (UID: "81d85589-8e77-4872-a37f-4318e132dafe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.031189 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d85589-8e77-4872-a37f-4318e132dafe-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.031227 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5jjk\" (UniqueName: \"kubernetes.io/projected/81d85589-8e77-4872-a37f-4318e132dafe-kube-api-access-m5jjk\") on node \"crc\" DevicePath \"\""
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.234948 4625 generic.go:334] "Generic (PLEG): container finished" podID="81d85589-8e77-4872-a37f-4318e132dafe" containerID="7460f25cad6151662007bb99e9e0a57678df64e11f0c9b2c44813da4501f712a" exitCode=0
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.235003 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddh2k" event={"ID":"81d85589-8e77-4872-a37f-4318e132dafe","Type":"ContainerDied","Data":"7460f25cad6151662007bb99e9e0a57678df64e11f0c9b2c44813da4501f712a"}
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.235041 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ddh2k" event={"ID":"81d85589-8e77-4872-a37f-4318e132dafe","Type":"ContainerDied","Data":"b000c16e4fd2b808d330407dccc89f48c74b05dd6037593cabba0c3bd74a1927"}
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.235059 4625 scope.go:117] "RemoveContainer" containerID="7460f25cad6151662007bb99e9e0a57678df64e11f0c9b2c44813da4501f712a"
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.235252 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ddh2k"
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.265411 4625 scope.go:117] "RemoveContainer" containerID="c283c96b6e9045bf9a3d76c4c057b94b0ecef79566ff35cfdd275beed41b0505"
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.276206 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ddh2k"]
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.289241 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ddh2k"]
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.293386 4625 scope.go:117] "RemoveContainer" containerID="178e31c9b6605576e4195dd6b686c6ddc192768d3ccdad73a6e9573a23e39d25"
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.335945 4625 scope.go:117] "RemoveContainer" containerID="7460f25cad6151662007bb99e9e0a57678df64e11f0c9b2c44813da4501f712a"
Dec 02 14:53:49 crc kubenswrapper[4625]: E1202 14:53:49.338262 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7460f25cad6151662007bb99e9e0a57678df64e11f0c9b2c44813da4501f712a\": container with ID starting with 7460f25cad6151662007bb99e9e0a57678df64e11f0c9b2c44813da4501f712a not found: ID does not exist" containerID="7460f25cad6151662007bb99e9e0a57678df64e11f0c9b2c44813da4501f712a"
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.338378 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7460f25cad6151662007bb99e9e0a57678df64e11f0c9b2c44813da4501f712a"} err="failed to get container status \"7460f25cad6151662007bb99e9e0a57678df64e11f0c9b2c44813da4501f712a\": rpc error: code = NotFound desc = could not find container \"7460f25cad6151662007bb99e9e0a57678df64e11f0c9b2c44813da4501f712a\": container with ID starting with 7460f25cad6151662007bb99e9e0a57678df64e11f0c9b2c44813da4501f712a not found: ID does not exist"
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.338418 4625 scope.go:117] "RemoveContainer" containerID="c283c96b6e9045bf9a3d76c4c057b94b0ecef79566ff35cfdd275beed41b0505"
Dec 02 14:53:49 crc kubenswrapper[4625]: E1202 14:53:49.338968 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c283c96b6e9045bf9a3d76c4c057b94b0ecef79566ff35cfdd275beed41b0505\": container with ID starting with c283c96b6e9045bf9a3d76c4c057b94b0ecef79566ff35cfdd275beed41b0505 not found: ID does not exist" containerID="c283c96b6e9045bf9a3d76c4c057b94b0ecef79566ff35cfdd275beed41b0505"
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.339003 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c283c96b6e9045bf9a3d76c4c057b94b0ecef79566ff35cfdd275beed41b0505"} err="failed to get container status \"c283c96b6e9045bf9a3d76c4c057b94b0ecef79566ff35cfdd275beed41b0505\": rpc error: code = NotFound desc = could not find container \"c283c96b6e9045bf9a3d76c4c057b94b0ecef79566ff35cfdd275beed41b0505\": container with ID starting with c283c96b6e9045bf9a3d76c4c057b94b0ecef79566ff35cfdd275beed41b0505 not found: ID does not exist"
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.339022 4625 scope.go:117] "RemoveContainer" containerID="178e31c9b6605576e4195dd6b686c6ddc192768d3ccdad73a6e9573a23e39d25"
Dec 02 14:53:49 crc kubenswrapper[4625]: E1202 14:53:49.339541 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"178e31c9b6605576e4195dd6b686c6ddc192768d3ccdad73a6e9573a23e39d25\": container with ID starting with 178e31c9b6605576e4195dd6b686c6ddc192768d3ccdad73a6e9573a23e39d25 not found: ID does not exist" containerID="178e31c9b6605576e4195dd6b686c6ddc192768d3ccdad73a6e9573a23e39d25"
Dec 02 14:53:49 crc kubenswrapper[4625]: I1202 14:53:49.339579 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178e31c9b6605576e4195dd6b686c6ddc192768d3ccdad73a6e9573a23e39d25"} err="failed to get container status \"178e31c9b6605576e4195dd6b686c6ddc192768d3ccdad73a6e9573a23e39d25\": rpc error: code = NotFound desc = could not find container \"178e31c9b6605576e4195dd6b686c6ddc192768d3ccdad73a6e9573a23e39d25\": container with ID starting with 178e31c9b6605576e4195dd6b686c6ddc192768d3ccdad73a6e9573a23e39d25 not found: ID does not exist"
Dec 02 14:53:50 crc kubenswrapper[4625]: I1202 14:53:50.870053 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81d85589-8e77-4872-a37f-4318e132dafe" path="/var/lib/kubelet/pods/81d85589-8e77-4872-a37f-4318e132dafe/volumes"
Dec 02 14:54:49 crc kubenswrapper[4625]: I1202 14:54:49.271294 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 14:54:49 crc kubenswrapper[4625]: I1202 14:54:49.274074 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 14:55:19 crc kubenswrapper[4625]: I1202 14:55:19.270987 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 14:55:19 crc kubenswrapper[4625]: I1202 14:55:19.271679 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 14:55:31 crc kubenswrapper[4625]: E1202 14:55:31.901779 4625 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.046s"
Dec 02 14:55:49 crc kubenswrapper[4625]: I1202 14:55:49.272092 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 14:55:49 crc kubenswrapper[4625]: I1202 14:55:49.272755 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 14:55:49 crc kubenswrapper[4625]: I1202 14:55:49.272819 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f"
Dec 02 14:55:49 crc kubenswrapper[4625]: I1202 14:55:49.274036 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8a1d35267ba95ba31f7250bf07dbd47dd1a309aae99409cf289867a32ae2221"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 14:55:49 crc kubenswrapper[4625]: I1202 14:55:49.274120 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://a8a1d35267ba95ba31f7250bf07dbd47dd1a309aae99409cf289867a32ae2221" gracePeriod=600
Dec 02 14:55:50 crc kubenswrapper[4625]: I1202 14:55:50.109188 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="a8a1d35267ba95ba31f7250bf07dbd47dd1a309aae99409cf289867a32ae2221" exitCode=0
Dec 02 14:55:50 crc kubenswrapper[4625]: I1202 14:55:50.109251 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"a8a1d35267ba95ba31f7250bf07dbd47dd1a309aae99409cf289867a32ae2221"}
Dec 02 14:55:50 crc kubenswrapper[4625]: I1202 14:55:50.109876 4625 scope.go:117] "RemoveContainer" containerID="0dd4556a8d819ae2279c312843dc7f99c7262548b6a748928823c2523d06091d"
Dec 02 14:55:51 crc kubenswrapper[4625]: I1202 14:55:51.133226 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633"}
Dec 02 14:56:58 crc kubenswrapper[4625]: I1202 14:56:58.943100 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gsg7d"]
Dec 02 14:56:58 crc kubenswrapper[4625]: E1202 14:56:58.946063 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d85589-8e77-4872-a37f-4318e132dafe" containerName="extract-utilities"
Dec 02 14:56:58 crc kubenswrapper[4625]: I1202 14:56:58.946165 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d85589-8e77-4872-a37f-4318e132dafe" containerName="extract-utilities"
Dec 02 14:56:58 crc kubenswrapper[4625]: E1202 14:56:58.946273 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d85589-8e77-4872-a37f-4318e132dafe" containerName="registry-server"
Dec 02 14:56:58 crc kubenswrapper[4625]: I1202 14:56:58.946374 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d85589-8e77-4872-a37f-4318e132dafe" containerName="registry-server"
Dec 02 14:56:58 crc kubenswrapper[4625]: E1202 14:56:58.946444 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d85589-8e77-4872-a37f-4318e132dafe" containerName="extract-content"
Dec 02 14:56:58 crc kubenswrapper[4625]: I1202 14:56:58.946524 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d85589-8e77-4872-a37f-4318e132dafe" containerName="extract-content"
Dec 02 14:56:58 crc kubenswrapper[4625]: I1202 14:56:58.946834 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d85589-8e77-4872-a37f-4318e132dafe" containerName="registry-server"
Dec 02 14:56:58 crc kubenswrapper[4625]: I1202 14:56:58.951949 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:56:59 crc kubenswrapper[4625]: I1202 14:56:59.013951 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsg7d"]
Dec 02 14:56:59 crc kubenswrapper[4625]: I1202 14:56:59.035211 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ecb849-6a38-432f-afb1-dbc8cf105361-utilities\") pod \"redhat-marketplace-gsg7d\" (UID: \"89ecb849-6a38-432f-afb1-dbc8cf105361\") " pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:56:59 crc kubenswrapper[4625]: I1202 14:56:59.035726 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjpfp\" (UniqueName: \"kubernetes.io/projected/89ecb849-6a38-432f-afb1-dbc8cf105361-kube-api-access-rjpfp\") pod \"redhat-marketplace-gsg7d\" (UID: \"89ecb849-6a38-432f-afb1-dbc8cf105361\") " pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:56:59 crc kubenswrapper[4625]: I1202 14:56:59.035910 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ecb849-6a38-432f-afb1-dbc8cf105361-catalog-content\") pod \"redhat-marketplace-gsg7d\" (UID: \"89ecb849-6a38-432f-afb1-dbc8cf105361\") " pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:56:59 crc kubenswrapper[4625]: I1202 14:56:59.138592 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ecb849-6a38-432f-afb1-dbc8cf105361-catalog-content\") pod \"redhat-marketplace-gsg7d\" (UID: \"89ecb849-6a38-432f-afb1-dbc8cf105361\") " pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:56:59 crc kubenswrapper[4625]: I1202 14:56:59.138976 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ecb849-6a38-432f-afb1-dbc8cf105361-utilities\") pod \"redhat-marketplace-gsg7d\" (UID: \"89ecb849-6a38-432f-afb1-dbc8cf105361\") " pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:56:59 crc kubenswrapper[4625]: I1202 14:56:59.139117 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjpfp\" (UniqueName: \"kubernetes.io/projected/89ecb849-6a38-432f-afb1-dbc8cf105361-kube-api-access-rjpfp\") pod \"redhat-marketplace-gsg7d\" (UID: \"89ecb849-6a38-432f-afb1-dbc8cf105361\") " pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:56:59 crc kubenswrapper[4625]: I1202 14:56:59.141850 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ecb849-6a38-432f-afb1-dbc8cf105361-utilities\") pod \"redhat-marketplace-gsg7d\" (UID: \"89ecb849-6a38-432f-afb1-dbc8cf105361\") " pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:56:59 crc kubenswrapper[4625]: I1202 14:56:59.142705 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ecb849-6a38-432f-afb1-dbc8cf105361-catalog-content\") pod \"redhat-marketplace-gsg7d\" (UID: \"89ecb849-6a38-432f-afb1-dbc8cf105361\") " pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:56:59 crc kubenswrapper[4625]: I1202 14:56:59.188108 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjpfp\" (UniqueName: \"kubernetes.io/projected/89ecb849-6a38-432f-afb1-dbc8cf105361-kube-api-access-rjpfp\") pod \"redhat-marketplace-gsg7d\" (UID: \"89ecb849-6a38-432f-afb1-dbc8cf105361\") " pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:56:59 crc kubenswrapper[4625]: I1202 14:56:59.325128 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:56:59 crc kubenswrapper[4625]: I1202 14:56:59.996300 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsg7d"]
Dec 02 14:57:00 crc kubenswrapper[4625]: W1202 14:57:00.048597 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89ecb849_6a38_432f_afb1_dbc8cf105361.slice/crio-34d924e230a2e9f2251fb4f523c4bec6779f05409c09cafa2fe4ce6af884a26d WatchSource:0}: Error finding container 34d924e230a2e9f2251fb4f523c4bec6779f05409c09cafa2fe4ce6af884a26d: Status 404 returned error can't find the container with id 34d924e230a2e9f2251fb4f523c4bec6779f05409c09cafa2fe4ce6af884a26d
Dec 02 14:57:00 crc kubenswrapper[4625]: I1202 14:57:00.067229 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsg7d" event={"ID":"89ecb849-6a38-432f-afb1-dbc8cf105361","Type":"ContainerStarted","Data":"34d924e230a2e9f2251fb4f523c4bec6779f05409c09cafa2fe4ce6af884a26d"}
Dec 02 14:57:01 crc kubenswrapper[4625]: I1202 14:57:01.081187 4625 generic.go:334] "Generic (PLEG): container finished" podID="89ecb849-6a38-432f-afb1-dbc8cf105361" containerID="dfb7d20e9ea4aac01fe833bcb307027d602a8f5c5633b1f9d50622e43baaef13" exitCode=0
Dec 02 14:57:01 crc kubenswrapper[4625]: I1202 14:57:01.081432 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsg7d" event={"ID":"89ecb849-6a38-432f-afb1-dbc8cf105361","Type":"ContainerDied","Data":"dfb7d20e9ea4aac01fe833bcb307027d602a8f5c5633b1f9d50622e43baaef13"}
Dec 02 14:57:02 crc kubenswrapper[4625]: I1202 14:57:02.098042 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsg7d" event={"ID":"89ecb849-6a38-432f-afb1-dbc8cf105361","Type":"ContainerStarted","Data":"3fa08c852ec3dbdf17a547f5053e58a53beaeeb2cfccc46a08e12beb092dc367"}
Dec 02 14:57:03 crc kubenswrapper[4625]: I1202 14:57:03.108620 4625 generic.go:334] "Generic (PLEG): container finished" podID="89ecb849-6a38-432f-afb1-dbc8cf105361" containerID="3fa08c852ec3dbdf17a547f5053e58a53beaeeb2cfccc46a08e12beb092dc367" exitCode=0
Dec 02 14:57:03 crc kubenswrapper[4625]: I1202 14:57:03.108898 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsg7d" event={"ID":"89ecb849-6a38-432f-afb1-dbc8cf105361","Type":"ContainerDied","Data":"3fa08c852ec3dbdf17a547f5053e58a53beaeeb2cfccc46a08e12beb092dc367"}
Dec 02 14:57:04 crc kubenswrapper[4625]: I1202 14:57:04.122403 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsg7d" event={"ID":"89ecb849-6a38-432f-afb1-dbc8cf105361","Type":"ContainerStarted","Data":"80ce336888da98ea3a2b2de424acb7701711cc00e7b8ceb69cab1d3f48f4a810"}
Dec 02 14:57:04 crc kubenswrapper[4625]: I1202 14:57:04.146547 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gsg7d" podStartSLOduration=3.694505215 podStartE2EDuration="6.146493067s" podCreationTimestamp="2025-12-02 14:56:58 +0000 UTC" firstStartedPulling="2025-12-02 14:57:01.083990753 +0000 UTC m=+4377.046167838" lastFinishedPulling="2025-12-02 14:57:03.535978615 +0000 UTC m=+4379.498155690" observedRunningTime="2025-12-02 14:57:04.140839192 +0000 UTC m=+4380.103016287" watchObservedRunningTime="2025-12-02 14:57:04.146493067 +0000 UTC m=+4380.108670142"
Dec 02 14:57:09 crc kubenswrapper[4625]: I1202 14:57:09.325678 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:57:09 crc kubenswrapper[4625]: I1202 14:57:09.326325 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:57:09 crc kubenswrapper[4625]: I1202 14:57:09.378778 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:57:10 crc kubenswrapper[4625]: I1202 14:57:10.261394 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:57:10 crc kubenswrapper[4625]: I1202 14:57:10.328133 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsg7d"]
Dec 02 14:57:12 crc kubenswrapper[4625]: I1202 14:57:12.202408 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gsg7d" podUID="89ecb849-6a38-432f-afb1-dbc8cf105361" containerName="registry-server" containerID="cri-o://80ce336888da98ea3a2b2de424acb7701711cc00e7b8ceb69cab1d3f48f4a810" gracePeriod=2
Dec 02 14:57:12 crc kubenswrapper[4625]: E1202 14:57:12.415989 4625 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89ecb849_6a38_432f_afb1_dbc8cf105361.slice/crio-80ce336888da98ea3a2b2de424acb7701711cc00e7b8ceb69cab1d3f48f4a810.scope\": RecentStats: unable to find data in memory cache]"
Dec 02 14:57:12 crc kubenswrapper[4625]: I1202 14:57:12.712663 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:57:12 crc kubenswrapper[4625]: I1202 14:57:12.784970 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjpfp\" (UniqueName: \"kubernetes.io/projected/89ecb849-6a38-432f-afb1-dbc8cf105361-kube-api-access-rjpfp\") pod \"89ecb849-6a38-432f-afb1-dbc8cf105361\" (UID: \"89ecb849-6a38-432f-afb1-dbc8cf105361\") "
Dec 02 14:57:12 crc kubenswrapper[4625]: I1202 14:57:12.785113 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ecb849-6a38-432f-afb1-dbc8cf105361-catalog-content\") pod \"89ecb849-6a38-432f-afb1-dbc8cf105361\" (UID: \"89ecb849-6a38-432f-afb1-dbc8cf105361\") "
Dec 02 14:57:12 crc kubenswrapper[4625]: I1202 14:57:12.785443 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ecb849-6a38-432f-afb1-dbc8cf105361-utilities\") pod \"89ecb849-6a38-432f-afb1-dbc8cf105361\" (UID: \"89ecb849-6a38-432f-afb1-dbc8cf105361\") "
Dec 02 14:57:12 crc kubenswrapper[4625]: I1202 14:57:12.787050 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89ecb849-6a38-432f-afb1-dbc8cf105361-utilities" (OuterVolumeSpecName: "utilities") pod "89ecb849-6a38-432f-afb1-dbc8cf105361" (UID: "89ecb849-6a38-432f-afb1-dbc8cf105361"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:57:12 crc kubenswrapper[4625]: I1202 14:57:12.793954 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ecb849-6a38-432f-afb1-dbc8cf105361-kube-api-access-rjpfp" (OuterVolumeSpecName: "kube-api-access-rjpfp") pod "89ecb849-6a38-432f-afb1-dbc8cf105361" (UID: "89ecb849-6a38-432f-afb1-dbc8cf105361"). InnerVolumeSpecName "kube-api-access-rjpfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 14:57:12 crc kubenswrapper[4625]: I1202 14:57:12.813599 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89ecb849-6a38-432f-afb1-dbc8cf105361-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89ecb849-6a38-432f-afb1-dbc8cf105361" (UID: "89ecb849-6a38-432f-afb1-dbc8cf105361"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 14:57:12 crc kubenswrapper[4625]: I1202 14:57:12.889085 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89ecb849-6a38-432f-afb1-dbc8cf105361-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 14:57:12 crc kubenswrapper[4625]: I1202 14:57:12.889129 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjpfp\" (UniqueName: \"kubernetes.io/projected/89ecb849-6a38-432f-afb1-dbc8cf105361-kube-api-access-rjpfp\") on node \"crc\" DevicePath \"\""
Dec 02 14:57:12 crc kubenswrapper[4625]: I1202 14:57:12.889147 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89ecb849-6a38-432f-afb1-dbc8cf105361-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 14:57:13 crc kubenswrapper[4625]: I1202 14:57:13.219464 4625 generic.go:334] "Generic (PLEG): container finished" podID="89ecb849-6a38-432f-afb1-dbc8cf105361" containerID="80ce336888da98ea3a2b2de424acb7701711cc00e7b8ceb69cab1d3f48f4a810" exitCode=0
Dec 02 14:57:13 crc kubenswrapper[4625]: I1202 14:57:13.219830 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsg7d" event={"ID":"89ecb849-6a38-432f-afb1-dbc8cf105361","Type":"ContainerDied","Data":"80ce336888da98ea3a2b2de424acb7701711cc00e7b8ceb69cab1d3f48f4a810"}
Dec 02 14:57:13 crc kubenswrapper[4625]: I1202 14:57:13.219882 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsg7d" event={"ID":"89ecb849-6a38-432f-afb1-dbc8cf105361","Type":"ContainerDied","Data":"34d924e230a2e9f2251fb4f523c4bec6779f05409c09cafa2fe4ce6af884a26d"}
Dec 02 14:57:13 crc kubenswrapper[4625]: I1202 14:57:13.219908 4625 scope.go:117] "RemoveContainer" containerID="80ce336888da98ea3a2b2de424acb7701711cc00e7b8ceb69cab1d3f48f4a810"
Dec 02 14:57:13 crc kubenswrapper[4625]: I1202 14:57:13.221394 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gsg7d"
Dec 02 14:57:13 crc kubenswrapper[4625]: I1202 14:57:13.263848 4625 scope.go:117] "RemoveContainer" containerID="3fa08c852ec3dbdf17a547f5053e58a53beaeeb2cfccc46a08e12beb092dc367"
Dec 02 14:57:13 crc kubenswrapper[4625]: I1202 14:57:13.272491 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsg7d"]
Dec 02 14:57:13 crc kubenswrapper[4625]: I1202 14:57:13.279974 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsg7d"]
Dec 02 14:57:13 crc kubenswrapper[4625]: I1202 14:57:13.306082 4625 scope.go:117] "RemoveContainer" containerID="dfb7d20e9ea4aac01fe833bcb307027d602a8f5c5633b1f9d50622e43baaef13"
Dec 02 14:57:13 crc kubenswrapper[4625]: I1202 14:57:13.340651 4625 scope.go:117] "RemoveContainer" containerID="80ce336888da98ea3a2b2de424acb7701711cc00e7b8ceb69cab1d3f48f4a810"
Dec 02 14:57:13 crc kubenswrapper[4625]: E1202 14:57:13.341467 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80ce336888da98ea3a2b2de424acb7701711cc00e7b8ceb69cab1d3f48f4a810\": container with ID starting with 80ce336888da98ea3a2b2de424acb7701711cc00e7b8ceb69cab1d3f48f4a810 not found: ID does not exist" containerID="80ce336888da98ea3a2b2de424acb7701711cc00e7b8ceb69cab1d3f48f4a810"
Dec 02 14:57:13 crc kubenswrapper[4625]: I1202 14:57:13.341530 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ce336888da98ea3a2b2de424acb7701711cc00e7b8ceb69cab1d3f48f4a810"} err="failed to get container status \"80ce336888da98ea3a2b2de424acb7701711cc00e7b8ceb69cab1d3f48f4a810\": rpc error: code = NotFound desc = could not find container \"80ce336888da98ea3a2b2de424acb7701711cc00e7b8ceb69cab1d3f48f4a810\": container with ID starting with 80ce336888da98ea3a2b2de424acb7701711cc00e7b8ceb69cab1d3f48f4a810 not found: ID does not exist"
Dec 02 14:57:13 crc kubenswrapper[4625]: I1202 14:57:13.341558 4625 scope.go:117] "RemoveContainer" containerID="3fa08c852ec3dbdf17a547f5053e58a53beaeeb2cfccc46a08e12beb092dc367"
Dec 02 14:57:13 crc kubenswrapper[4625]: E1202 14:57:13.342129 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fa08c852ec3dbdf17a547f5053e58a53beaeeb2cfccc46a08e12beb092dc367\": container with ID starting with 3fa08c852ec3dbdf17a547f5053e58a53beaeeb2cfccc46a08e12beb092dc367 not found: ID does not exist" containerID="3fa08c852ec3dbdf17a547f5053e58a53beaeeb2cfccc46a08e12beb092dc367"
Dec 02 14:57:13 crc kubenswrapper[4625]: I1202 14:57:13.342175 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fa08c852ec3dbdf17a547f5053e58a53beaeeb2cfccc46a08e12beb092dc367"} err="failed to get container status \"3fa08c852ec3dbdf17a547f5053e58a53beaeeb2cfccc46a08e12beb092dc367\": rpc error: code = NotFound desc = could not find container \"3fa08c852ec3dbdf17a547f5053e58a53beaeeb2cfccc46a08e12beb092dc367\": container with ID starting with 3fa08c852ec3dbdf17a547f5053e58a53beaeeb2cfccc46a08e12beb092dc367 not found: ID does not exist"
Dec 02 14:57:13 crc kubenswrapper[4625]: I1202 14:57:13.342213 4625 scope.go:117] "RemoveContainer" containerID="dfb7d20e9ea4aac01fe833bcb307027d602a8f5c5633b1f9d50622e43baaef13"
Dec 02 14:57:13 crc kubenswrapper[4625]: E1202 14:57:13.342683 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfb7d20e9ea4aac01fe833bcb307027d602a8f5c5633b1f9d50622e43baaef13\": container with ID starting with dfb7d20e9ea4aac01fe833bcb307027d602a8f5c5633b1f9d50622e43baaef13 not found: ID does not exist" containerID="dfb7d20e9ea4aac01fe833bcb307027d602a8f5c5633b1f9d50622e43baaef13"
Dec 02 14:57:13 crc kubenswrapper[4625]: I1202 14:57:13.342730 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfb7d20e9ea4aac01fe833bcb307027d602a8f5c5633b1f9d50622e43baaef13"} err="failed to get container status \"dfb7d20e9ea4aac01fe833bcb307027d602a8f5c5633b1f9d50622e43baaef13\": rpc error: code = NotFound desc = could not find container \"dfb7d20e9ea4aac01fe833bcb307027d602a8f5c5633b1f9d50622e43baaef13\": container with ID starting with dfb7d20e9ea4aac01fe833bcb307027d602a8f5c5633b1f9d50622e43baaef13 not found: ID does not exist"
Dec 02 14:57:14 crc kubenswrapper[4625]: I1202 14:57:14.877219 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89ecb849-6a38-432f-afb1-dbc8cf105361" path="/var/lib/kubelet/pods/89ecb849-6a38-432f-afb1-dbc8cf105361/volumes"
Dec 02 14:58:19 crc kubenswrapper[4625]: I1202 14:58:19.270928 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 14:58:19 crc kubenswrapper[4625]: I1202 14:58:19.271630 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 14:58:49 crc kubenswrapper[4625]: I1202 14:58:49.271741 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 14:58:49 crc kubenswrapper[4625]: I1202 14:58:49.272743 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 14:59:19 crc kubenswrapper[4625]: I1202 14:59:19.271234 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 02 14:59:19 crc kubenswrapper[4625]: I1202 14:59:19.271937 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 02 14:59:19 crc kubenswrapper[4625]: I1202 14:59:19.272016 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f"
Dec 02 14:59:19 crc kubenswrapper[4625]: I1202 14:59:19.273361 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 02 14:59:19 crc kubenswrapper[4625]: I1202 14:59:19.273469 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" gracePeriod=600
Dec 02 14:59:19 crc kubenswrapper[4625]: E1202 14:59:19.404670 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:59:19 crc kubenswrapper[4625]: I1202 14:59:19.788450 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" exitCode=0
Dec 02 14:59:19 crc kubenswrapper[4625]: I1202 14:59:19.788569 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633"}
Dec 02 14:59:19 crc kubenswrapper[4625]: I1202 14:59:19.788878 4625 scope.go:117] "RemoveContainer" containerID="a8a1d35267ba95ba31f7250bf07dbd47dd1a309aae99409cf289867a32ae2221"
Dec 02 14:59:19 crc kubenswrapper[4625]: I1202 14:59:19.790220 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633"
Dec 02 14:59:19 crc kubenswrapper[4625]: E1202 14:59:19.793594 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:59:30 crc kubenswrapper[4625]: I1202 14:59:30.856696 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633"
Dec 02 14:59:30 crc kubenswrapper[4625]: E1202 14:59:30.857797 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:59:42 crc kubenswrapper[4625]: I1202 14:59:42.856932 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633"
Dec 02 14:59:42 crc kubenswrapper[4625]: E1202 14:59:42.858035 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 14:59:54 crc kubenswrapper[4625]: I1202 14:59:54.876537 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633"
Dec 02 14:59:54 crc kubenswrapper[4625]: E1202 14:59:54.878103 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.242591 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg"]
Dec 02 15:00:00 crc kubenswrapper[4625]: E1202 15:00:00.243924 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ecb849-6a38-432f-afb1-dbc8cf105361" containerName="extract-utilities"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.243942 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ecb849-6a38-432f-afb1-dbc8cf105361" containerName="extract-utilities"
Dec 02 15:00:00 crc kubenswrapper[4625]: E1202 15:00:00.243998 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ecb849-6a38-432f-afb1-dbc8cf105361" containerName="extract-content"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.244008 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ecb849-6a38-432f-afb1-dbc8cf105361" containerName="extract-content"
Dec 02 15:00:00 crc kubenswrapper[4625]: E1202 15:00:00.244055 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ecb849-6a38-432f-afb1-dbc8cf105361" containerName="registry-server"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.244066 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ecb849-6a38-432f-afb1-dbc8cf105361" containerName="registry-server"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.244580 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ecb849-6a38-432f-afb1-dbc8cf105361" containerName="registry-server"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.245761 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.252100 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.257623 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.308809 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg"]
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.320726 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnrdr\" (UniqueName: \"kubernetes.io/projected/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-kube-api-access-nnrdr\") pod \"collect-profiles-29411460-zbvgg\" (UID: \"cd590abf-4b6f-4159-a8d0-2b02e2df45d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.320864 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-secret-volume\") pod \"collect-profiles-29411460-zbvgg\" (UID: \"cd590abf-4b6f-4159-a8d0-2b02e2df45d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.321192 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-config-volume\") pod \"collect-profiles-29411460-zbvgg\" (UID: \"cd590abf-4b6f-4159-a8d0-2b02e2df45d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.424024 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-config-volume\") pod \"collect-profiles-29411460-zbvgg\" (UID: \"cd590abf-4b6f-4159-a8d0-2b02e2df45d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.424182 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnrdr\" (UniqueName: \"kubernetes.io/projected/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-kube-api-access-nnrdr\") pod \"collect-profiles-29411460-zbvgg\" (UID: \"cd590abf-4b6f-4159-a8d0-2b02e2df45d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.424431 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-secret-volume\") pod \"collect-profiles-29411460-zbvgg\" (UID: \"cd590abf-4b6f-4159-a8d0-2b02e2df45d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.425253 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-config-volume\") pod \"collect-profiles-29411460-zbvgg\" (UID: \"cd590abf-4b6f-4159-a8d0-2b02e2df45d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.431120 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-secret-volume\") pod \"collect-profiles-29411460-zbvgg\" (UID: \"cd590abf-4b6f-4159-a8d0-2b02e2df45d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.453871 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnrdr\" (UniqueName: \"kubernetes.io/projected/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-kube-api-access-nnrdr\") pod \"collect-profiles-29411460-zbvgg\" (UID: \"cd590abf-4b6f-4159-a8d0-2b02e2df45d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg"
Dec 02 15:00:00 crc kubenswrapper[4625]: I1202 15:00:00.613284 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg"
Dec 02 15:00:01 crc kubenswrapper[4625]: I1202 15:00:01.163077 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg"]
Dec 02 15:00:01 crc kubenswrapper[4625]: I1202 15:00:01.323157 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg" event={"ID":"cd590abf-4b6f-4159-a8d0-2b02e2df45d6","Type":"ContainerStarted","Data":"80c54c1c8992ffeb37b78f20ff67ccf24acbe43933f41145fc77b10577e298bd"}
Dec 02 15:00:02 crc kubenswrapper[4625]: I1202 15:00:02.336769 4625 generic.go:334] "Generic (PLEG): container finished" podID="cd590abf-4b6f-4159-a8d0-2b02e2df45d6" containerID="8d5cf66ba37baa7e110097a7bfd1e4c54c54211b13307cc4612dd99910018bf7" exitCode=0
Dec 02 15:00:02 crc kubenswrapper[4625]: I1202 15:00:02.337034 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg" event={"ID":"cd590abf-4b6f-4159-a8d0-2b02e2df45d6","Type":"ContainerDied","Data":"8d5cf66ba37baa7e110097a7bfd1e4c54c54211b13307cc4612dd99910018bf7"}
Dec 02 15:00:03 crc kubenswrapper[4625]: I1202 15:00:03.866754 4625 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg" Dec 02 15:00:04 crc kubenswrapper[4625]: I1202 15:00:04.015778 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-config-volume\") pod \"cd590abf-4b6f-4159-a8d0-2b02e2df45d6\" (UID: \"cd590abf-4b6f-4159-a8d0-2b02e2df45d6\") " Dec 02 15:00:04 crc kubenswrapper[4625]: I1202 15:00:04.015853 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnrdr\" (UniqueName: \"kubernetes.io/projected/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-kube-api-access-nnrdr\") pod \"cd590abf-4b6f-4159-a8d0-2b02e2df45d6\" (UID: \"cd590abf-4b6f-4159-a8d0-2b02e2df45d6\") " Dec 02 15:00:04 crc kubenswrapper[4625]: I1202 15:00:04.016133 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-secret-volume\") pod \"cd590abf-4b6f-4159-a8d0-2b02e2df45d6\" (UID: \"cd590abf-4b6f-4159-a8d0-2b02e2df45d6\") " Dec 02 15:00:04 crc kubenswrapper[4625]: I1202 15:00:04.016741 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-config-volume" (OuterVolumeSpecName: "config-volume") pod "cd590abf-4b6f-4159-a8d0-2b02e2df45d6" (UID: "cd590abf-4b6f-4159-a8d0-2b02e2df45d6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:00:04 crc kubenswrapper[4625]: I1202 15:00:04.017999 4625 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-config-volume\") on node \"crc\" DevicePath \"\"" Dec 02 15:00:04 crc kubenswrapper[4625]: I1202 15:00:04.026218 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-kube-api-access-nnrdr" (OuterVolumeSpecName: "kube-api-access-nnrdr") pod "cd590abf-4b6f-4159-a8d0-2b02e2df45d6" (UID: "cd590abf-4b6f-4159-a8d0-2b02e2df45d6"). InnerVolumeSpecName "kube-api-access-nnrdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:00:04 crc kubenswrapper[4625]: I1202 15:00:04.031172 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cd590abf-4b6f-4159-a8d0-2b02e2df45d6" (UID: "cd590abf-4b6f-4159-a8d0-2b02e2df45d6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:00:04 crc kubenswrapper[4625]: I1202 15:00:04.122092 4625 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 02 15:00:04 crc kubenswrapper[4625]: I1202 15:00:04.122188 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnrdr\" (UniqueName: \"kubernetes.io/projected/cd590abf-4b6f-4159-a8d0-2b02e2df45d6-kube-api-access-nnrdr\") on node \"crc\" DevicePath \"\"" Dec 02 15:00:04 crc kubenswrapper[4625]: I1202 15:00:04.360725 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg" event={"ID":"cd590abf-4b6f-4159-a8d0-2b02e2df45d6","Type":"ContainerDied","Data":"80c54c1c8992ffeb37b78f20ff67ccf24acbe43933f41145fc77b10577e298bd"} Dec 02 15:00:04 crc kubenswrapper[4625]: I1202 15:00:04.360783 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411460-zbvgg" Dec 02 15:00:04 crc kubenswrapper[4625]: I1202 15:00:04.360791 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80c54c1c8992ffeb37b78f20ff67ccf24acbe43933f41145fc77b10577e298bd" Dec 02 15:00:04 crc kubenswrapper[4625]: I1202 15:00:04.977296 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st"] Dec 02 15:00:04 crc kubenswrapper[4625]: I1202 15:00:04.990584 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411415-6v4st"] Dec 02 15:00:06 crc kubenswrapper[4625]: I1202 15:00:06.856889 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:00:06 crc kubenswrapper[4625]: E1202 15:00:06.857161 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:00:06 crc kubenswrapper[4625]: I1202 15:00:06.870992 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="158a665a-fd81-4505-94cf-75154b25d97c" path="/var/lib/kubelet/pods/158a665a-fd81-4505-94cf-75154b25d97c/volumes" Dec 02 15:00:07 crc kubenswrapper[4625]: I1202 15:00:07.707583 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t9cvk"] Dec 02 15:00:07 crc kubenswrapper[4625]: E1202 15:00:07.708999 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd590abf-4b6f-4159-a8d0-2b02e2df45d6" containerName="collect-profiles" Dec 02 15:00:07 crc kubenswrapper[4625]: I1202 15:00:07.709035 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd590abf-4b6f-4159-a8d0-2b02e2df45d6" containerName="collect-profiles" Dec 02 15:00:07 crc kubenswrapper[4625]: I1202 15:00:07.717408 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd590abf-4b6f-4159-a8d0-2b02e2df45d6" containerName="collect-profiles" Dec 02 15:00:07 crc kubenswrapper[4625]: I1202 15:00:07.724634 4625 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:07 crc kubenswrapper[4625]: I1202 15:00:07.765117 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t9cvk"] Dec 02 15:00:07 crc kubenswrapper[4625]: I1202 15:00:07.821157 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-utilities\") pod \"certified-operators-t9cvk\" (UID: \"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355\") " pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:07 crc kubenswrapper[4625]: I1202 15:00:07.821265 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5245f\" (UniqueName: \"kubernetes.io/projected/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-kube-api-access-5245f\") pod \"certified-operators-t9cvk\" (UID: \"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355\") " pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:07 crc kubenswrapper[4625]: I1202 15:00:07.821352 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-catalog-content\") pod \"certified-operators-t9cvk\" (UID: \"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355\") " pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:07 crc kubenswrapper[4625]: I1202 15:00:07.924422 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-utilities\") pod \"certified-operators-t9cvk\" (UID: \"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355\") " pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:07 crc kubenswrapper[4625]: I1202 15:00:07.924487 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5245f\" (UniqueName: \"kubernetes.io/projected/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-kube-api-access-5245f\") pod \"certified-operators-t9cvk\" (UID: \"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355\") " pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:07 crc kubenswrapper[4625]: I1202 15:00:07.924543 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-catalog-content\") pod \"certified-operators-t9cvk\" (UID: \"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355\") " pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:07 crc kubenswrapper[4625]: I1202 15:00:07.925554 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-utilities\") pod \"certified-operators-t9cvk\" (UID: \"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355\") " pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:07 crc kubenswrapper[4625]: I1202 15:00:07.925779 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-catalog-content\") pod \"certified-operators-t9cvk\" (UID: \"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355\") " pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:07 crc kubenswrapper[4625]: I1202 15:00:07.954035 4625 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5245f\" (UniqueName: \"kubernetes.io/projected/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-kube-api-access-5245f\") pod \"certified-operators-t9cvk\" (UID: \"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355\") " pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:08 crc kubenswrapper[4625]: I1202 15:00:08.060486 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:08 crc kubenswrapper[4625]: I1202 15:00:08.640513 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t9cvk"] Dec 02 15:00:09 crc kubenswrapper[4625]: I1202 15:00:09.412510 4625 generic.go:334] "Generic (PLEG): container finished" podID="f9b6534e-e7a6-4d96-90a5-4d9fd6d89355" containerID="2b493a54b055efdaecd55899c73eca238992f048852ae5709394a73b0ce84afd" exitCode=0 Dec 02 15:00:09 crc kubenswrapper[4625]: I1202 15:00:09.412636 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9cvk" event={"ID":"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355","Type":"ContainerDied","Data":"2b493a54b055efdaecd55899c73eca238992f048852ae5709394a73b0ce84afd"} Dec 02 15:00:09 crc kubenswrapper[4625]: I1202 15:00:09.412931 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9cvk" event={"ID":"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355","Type":"ContainerStarted","Data":"2833260d7a06fedad7c585f56daad6384146e0883b143fe080f4abd5d66dec3d"} Dec 02 15:00:09 crc kubenswrapper[4625]: I1202 15:00:09.418034 4625 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 15:00:10 crc kubenswrapper[4625]: I1202 15:00:10.852754 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rkxdz"] Dec 02 15:00:10 crc kubenswrapper[4625]: I1202 15:00:10.855969 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:10 crc kubenswrapper[4625]: I1202 15:00:10.871629 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rkxdz"] Dec 02 15:00:11 crc kubenswrapper[4625]: I1202 15:00:11.013456 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2jbb\" (UniqueName: \"kubernetes.io/projected/8d8f43ef-7def-4053-ada9-a9af5ece146b-kube-api-access-p2jbb\") pod \"redhat-operators-rkxdz\" (UID: \"8d8f43ef-7def-4053-ada9-a9af5ece146b\") " pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:11 crc kubenswrapper[4625]: I1202 15:00:11.013580 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d8f43ef-7def-4053-ada9-a9af5ece146b-catalog-content\") pod \"redhat-operators-rkxdz\" (UID: \"8d8f43ef-7def-4053-ada9-a9af5ece146b\") " pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:11 crc kubenswrapper[4625]: I1202 15:00:11.013714 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d8f43ef-7def-4053-ada9-a9af5ece146b-utilities\") pod \"redhat-operators-rkxdz\" (UID: \"8d8f43ef-7def-4053-ada9-a9af5ece146b\") " pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:11 crc kubenswrapper[4625]: I1202 15:00:11.115925 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d8f43ef-7def-4053-ada9-a9af5ece146b-utilities\") pod \"redhat-operators-rkxdz\" (UID: \"8d8f43ef-7def-4053-ada9-a9af5ece146b\") " pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:11 crc kubenswrapper[4625]: I1202 15:00:11.116008 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2jbb\" (UniqueName: \"kubernetes.io/projected/8d8f43ef-7def-4053-ada9-a9af5ece146b-kube-api-access-p2jbb\") pod \"redhat-operators-rkxdz\" (UID: \"8d8f43ef-7def-4053-ada9-a9af5ece146b\") " pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:11 crc kubenswrapper[4625]: I1202 15:00:11.116099 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d8f43ef-7def-4053-ada9-a9af5ece146b-catalog-content\") pod \"redhat-operators-rkxdz\" (UID: \"8d8f43ef-7def-4053-ada9-a9af5ece146b\") " pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:11 crc kubenswrapper[4625]: I1202 15:00:11.116682 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d8f43ef-7def-4053-ada9-a9af5ece146b-utilities\") pod \"redhat-operators-rkxdz\" (UID: \"8d8f43ef-7def-4053-ada9-a9af5ece146b\") " pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:11 crc kubenswrapper[4625]: I1202 15:00:11.116842 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d8f43ef-7def-4053-ada9-a9af5ece146b-catalog-content\") pod \"redhat-operators-rkxdz\" (UID: \"8d8f43ef-7def-4053-ada9-a9af5ece146b\") " pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:11 crc kubenswrapper[4625]: I1202 15:00:11.136026 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p2jbb\" (UniqueName: \"kubernetes.io/projected/8d8f43ef-7def-4053-ada9-a9af5ece146b-kube-api-access-p2jbb\") pod \"redhat-operators-rkxdz\" (UID: \"8d8f43ef-7def-4053-ada9-a9af5ece146b\") " pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:11 crc kubenswrapper[4625]: I1202 15:00:11.185374 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:11 crc kubenswrapper[4625]: I1202 15:00:11.441482 4625 generic.go:334] "Generic (PLEG): container finished" podID="f9b6534e-e7a6-4d96-90a5-4d9fd6d89355" containerID="4b556ab473466dac51505edbd4ae0ad59d19030eba8e4aad5456a4d8fe96702f" exitCode=0 Dec 02 15:00:11 crc kubenswrapper[4625]: I1202 15:00:11.441574 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9cvk" event={"ID":"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355","Type":"ContainerDied","Data":"4b556ab473466dac51505edbd4ae0ad59d19030eba8e4aad5456a4d8fe96702f"} Dec 02 15:00:11 crc kubenswrapper[4625]: I1202 15:00:11.690395 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rkxdz"] Dec 02 15:00:11 crc kubenswrapper[4625]: W1202 15:00:11.690577 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d8f43ef_7def_4053_ada9_a9af5ece146b.slice/crio-45ff3f8e31d4f8ef83285dc66e6bcf7f1530d2957f567a295c8a58fefdcb1f80 WatchSource:0}: Error finding container 45ff3f8e31d4f8ef83285dc66e6bcf7f1530d2957f567a295c8a58fefdcb1f80: Status 404 returned error can't find the container with id 45ff3f8e31d4f8ef83285dc66e6bcf7f1530d2957f567a295c8a58fefdcb1f80 Dec 02 15:00:12 crc kubenswrapper[4625]: I1202 15:00:12.454503 4625 generic.go:334] "Generic (PLEG): container finished" podID="8d8f43ef-7def-4053-ada9-a9af5ece146b" containerID="dac2d90e7244a9e6137e1fc4dfc01376f132663d2e47fb42f7cf758cf8224fb1" exitCode=0 Dec 02 15:00:12 crc kubenswrapper[4625]: I1202 15:00:12.454598 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkxdz" event={"ID":"8d8f43ef-7def-4053-ada9-a9af5ece146b","Type":"ContainerDied","Data":"dac2d90e7244a9e6137e1fc4dfc01376f132663d2e47fb42f7cf758cf8224fb1"} Dec 02 15:00:12 crc kubenswrapper[4625]: I1202 15:00:12.454880 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkxdz" event={"ID":"8d8f43ef-7def-4053-ada9-a9af5ece146b","Type":"ContainerStarted","Data":"45ff3f8e31d4f8ef83285dc66e6bcf7f1530d2957f567a295c8a58fefdcb1f80"} Dec 02 15:00:12 crc kubenswrapper[4625]: I1202 15:00:12.457918 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9cvk" event={"ID":"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355","Type":"ContainerStarted","Data":"3f891208ed8dfeb5ec87deed1464c61a30a883acedf9afd6bda229dd72b25c14"} Dec 02 15:00:12 crc kubenswrapper[4625]: I1202 15:00:12.507543 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t9cvk" podStartSLOduration=3.021006833 podStartE2EDuration="5.507522043s" podCreationTimestamp="2025-12-02 15:00:07 +0000 UTC" firstStartedPulling="2025-12-02 15:00:09.417628316 +0000 UTC m=+4565.379805391" lastFinishedPulling="2025-12-02 15:00:11.904143526 +0000 UTC m=+4567.866320601" observedRunningTime="2025-12-02 15:00:12.506482674 +0000 UTC m=+4568.468659749" watchObservedRunningTime="2025-12-02 
15:00:12.507522043 +0000 UTC m=+4568.469699108" Dec 02 15:00:14 crc kubenswrapper[4625]: I1202 15:00:14.483865 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkxdz" event={"ID":"8d8f43ef-7def-4053-ada9-a9af5ece146b","Type":"ContainerStarted","Data":"cf942acc22ecbabe05da71e9b8f61f8694852754ebdff439656b1abab00f2a6d"} Dec 02 15:00:17 crc kubenswrapper[4625]: I1202 15:00:17.526840 4625 generic.go:334] "Generic (PLEG): container finished" podID="8d8f43ef-7def-4053-ada9-a9af5ece146b" containerID="cf942acc22ecbabe05da71e9b8f61f8694852754ebdff439656b1abab00f2a6d" exitCode=0 Dec 02 15:00:17 crc kubenswrapper[4625]: I1202 15:00:17.526887 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkxdz" event={"ID":"8d8f43ef-7def-4053-ada9-a9af5ece146b","Type":"ContainerDied","Data":"cf942acc22ecbabe05da71e9b8f61f8694852754ebdff439656b1abab00f2a6d"} Dec 02 15:00:17 crc kubenswrapper[4625]: E1202 15:00:17.649582 4625 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d8f43ef_7def_4053_ada9_a9af5ece146b.slice/crio-conmon-cf942acc22ecbabe05da71e9b8f61f8694852754ebdff439656b1abab00f2a6d.scope\": RecentStats: unable to find data in memory cache]" Dec 02 15:00:17 crc kubenswrapper[4625]: I1202 15:00:17.855937 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:00:17 crc kubenswrapper[4625]: E1202 15:00:17.856596 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:00:18 crc kubenswrapper[4625]: I1202 15:00:18.061489 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:18 crc kubenswrapper[4625]: I1202 15:00:18.061661 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:18 crc kubenswrapper[4625]: I1202 15:00:18.117302 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:18 crc kubenswrapper[4625]: I1202 15:00:18.540281 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkxdz" event={"ID":"8d8f43ef-7def-4053-ada9-a9af5ece146b","Type":"ContainerStarted","Data":"647b4f574e51970186b3e9b4cae666cacfa3aeb6a05999d27c94799938e1a869"} Dec 02 15:00:18 crc kubenswrapper[4625]: I1202 15:00:18.579130 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rkxdz" podStartSLOduration=2.972352761 podStartE2EDuration="8.57910143s" podCreationTimestamp="2025-12-02 15:00:10 +0000 UTC" firstStartedPulling="2025-12-02 15:00:12.458110682 +0000 UTC m=+4568.420287767" lastFinishedPulling="2025-12-02 15:00:18.064859351 +0000 UTC m=+4574.027036436" observedRunningTime="2025-12-02 15:00:18.569606302 +0000 UTC m=+4574.531783377" watchObservedRunningTime="2025-12-02 15:00:18.57910143 +0000 UTC 
m=+4574.541278505" Dec 02 15:00:18 crc kubenswrapper[4625]: I1202 15:00:18.602942 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:20 crc kubenswrapper[4625]: I1202 15:00:20.254013 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t9cvk"] Dec 02 15:00:20 crc kubenswrapper[4625]: I1202 15:00:20.565667 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t9cvk" podUID="f9b6534e-e7a6-4d96-90a5-4d9fd6d89355" containerName="registry-server" containerID="cri-o://3f891208ed8dfeb5ec87deed1464c61a30a883acedf9afd6bda229dd72b25c14" gracePeriod=2 Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.129222 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.186175 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.186388 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.277326 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-utilities\") pod \"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355\" (UID: \"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355\") " Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.277395 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5245f\" (UniqueName: \"kubernetes.io/projected/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-kube-api-access-5245f\") pod \"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355\" (UID: \"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355\") " Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.277474 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-catalog-content\") pod \"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355\" (UID: \"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355\") " Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.277973 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-utilities" (OuterVolumeSpecName: "utilities") pod "f9b6534e-e7a6-4d96-90a5-4d9fd6d89355" (UID: "f9b6534e-e7a6-4d96-90a5-4d9fd6d89355"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.289582 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-kube-api-access-5245f" (OuterVolumeSpecName: "kube-api-access-5245f") pod "f9b6534e-e7a6-4d96-90a5-4d9fd6d89355" (UID: "f9b6534e-e7a6-4d96-90a5-4d9fd6d89355"). InnerVolumeSpecName "kube-api-access-5245f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.332248 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9b6534e-e7a6-4d96-90a5-4d9fd6d89355" (UID: "f9b6534e-e7a6-4d96-90a5-4d9fd6d89355"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.380719 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.380753 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5245f\" (UniqueName: \"kubernetes.io/projected/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-kube-api-access-5245f\") on node \"crc\" DevicePath \"\"" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.380767 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.580209 4625 generic.go:334] "Generic (PLEG): container finished" podID="f9b6534e-e7a6-4d96-90a5-4d9fd6d89355" containerID="3f891208ed8dfeb5ec87deed1464c61a30a883acedf9afd6bda229dd72b25c14" exitCode=0 Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.580359 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9cvk" event={"ID":"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355","Type":"ContainerDied","Data":"3f891208ed8dfeb5ec87deed1464c61a30a883acedf9afd6bda229dd72b25c14"} Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.580623 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9cvk" event={"ID":"f9b6534e-e7a6-4d96-90a5-4d9fd6d89355","Type":"ContainerDied","Data":"2833260d7a06fedad7c585f56daad6384146e0883b143fe080f4abd5d66dec3d"} Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.580365 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t9cvk" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.580663 4625 scope.go:117] "RemoveContainer" containerID="3f891208ed8dfeb5ec87deed1464c61a30a883acedf9afd6bda229dd72b25c14" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.611712 4625 scope.go:117] "RemoveContainer" containerID="4b556ab473466dac51505edbd4ae0ad59d19030eba8e4aad5456a4d8fe96702f" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.631367 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t9cvk"] Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.643009 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t9cvk"] Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.643894 4625 scope.go:117] "RemoveContainer" containerID="2b493a54b055efdaecd55899c73eca238992f048852ae5709394a73b0ce84afd" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.696232 4625 scope.go:117] "RemoveContainer" containerID="3f891208ed8dfeb5ec87deed1464c61a30a883acedf9afd6bda229dd72b25c14" Dec 02 15:00:21 crc kubenswrapper[4625]: E1202 15:00:21.696808 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f891208ed8dfeb5ec87deed1464c61a30a883acedf9afd6bda229dd72b25c14\": container with ID starting with 3f891208ed8dfeb5ec87deed1464c61a30a883acedf9afd6bda229dd72b25c14 not found: ID does not exist" containerID="3f891208ed8dfeb5ec87deed1464c61a30a883acedf9afd6bda229dd72b25c14" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.696841 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f891208ed8dfeb5ec87deed1464c61a30a883acedf9afd6bda229dd72b25c14"} err="failed to get container status \"3f891208ed8dfeb5ec87deed1464c61a30a883acedf9afd6bda229dd72b25c14\": rpc error: code = NotFound desc = could not find container \"3f891208ed8dfeb5ec87deed1464c61a30a883acedf9afd6bda229dd72b25c14\": container with ID starting with 3f891208ed8dfeb5ec87deed1464c61a30a883acedf9afd6bda229dd72b25c14 not found: ID does not exist" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.696878 4625 scope.go:117] "RemoveContainer" containerID="4b556ab473466dac51505edbd4ae0ad59d19030eba8e4aad5456a4d8fe96702f" Dec 02 15:00:21 crc kubenswrapper[4625]: E1202 15:00:21.697343 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b556ab473466dac51505edbd4ae0ad59d19030eba8e4aad5456a4d8fe96702f\": container with ID starting with 4b556ab473466dac51505edbd4ae0ad59d19030eba8e4aad5456a4d8fe96702f not found: ID does not exist" containerID="4b556ab473466dac51505edbd4ae0ad59d19030eba8e4aad5456a4d8fe96702f" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.697364 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b556ab473466dac51505edbd4ae0ad59d19030eba8e4aad5456a4d8fe96702f"} err="failed to get container status \"4b556ab473466dac51505edbd4ae0ad59d19030eba8e4aad5456a4d8fe96702f\": rpc error: code = NotFound desc = could not find container \"4b556ab473466dac51505edbd4ae0ad59d19030eba8e4aad5456a4d8fe96702f\": container with ID starting with 4b556ab473466dac51505edbd4ae0ad59d19030eba8e4aad5456a4d8fe96702f not found: ID does not exist" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.697377 4625 scope.go:117] "RemoveContainer" 
containerID="2b493a54b055efdaecd55899c73eca238992f048852ae5709394a73b0ce84afd" Dec 02 15:00:21 crc kubenswrapper[4625]: E1202 15:00:21.697595 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b493a54b055efdaecd55899c73eca238992f048852ae5709394a73b0ce84afd\": container with ID starting with 2b493a54b055efdaecd55899c73eca238992f048852ae5709394a73b0ce84afd not found: ID does not exist" containerID="2b493a54b055efdaecd55899c73eca238992f048852ae5709394a73b0ce84afd" Dec 02 15:00:21 crc kubenswrapper[4625]: I1202 15:00:21.697610 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b493a54b055efdaecd55899c73eca238992f048852ae5709394a73b0ce84afd"} err="failed to get container status \"2b493a54b055efdaecd55899c73eca238992f048852ae5709394a73b0ce84afd\": rpc error: code = NotFound desc = could not find container \"2b493a54b055efdaecd55899c73eca238992f048852ae5709394a73b0ce84afd\": container with ID starting with 2b493a54b055efdaecd55899c73eca238992f048852ae5709394a73b0ce84afd not found: ID does not exist" Dec 02 15:00:22 crc kubenswrapper[4625]: I1202 15:00:22.238063 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rkxdz" podUID="8d8f43ef-7def-4053-ada9-a9af5ece146b" containerName="registry-server" probeResult="failure" output=< Dec 02 15:00:22 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s Dec 02 15:00:22 crc kubenswrapper[4625]: > Dec 02 15:00:22 crc kubenswrapper[4625]: I1202 15:00:22.872472 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9b6534e-e7a6-4d96-90a5-4d9fd6d89355" path="/var/lib/kubelet/pods/f9b6534e-e7a6-4d96-90a5-4d9fd6d89355/volumes" Dec 02 15:00:30 crc kubenswrapper[4625]: I1202 15:00:30.856843 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:00:30 crc kubenswrapper[4625]: E1202 15:00:30.857922 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:00:31 crc kubenswrapper[4625]: I1202 15:00:31.330928 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:31 crc kubenswrapper[4625]: I1202 15:00:31.411362 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:31 crc kubenswrapper[4625]: I1202 15:00:31.582505 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rkxdz"] Dec 02 15:00:32 crc kubenswrapper[4625]: I1202 15:00:32.696747 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rkxdz" podUID="8d8f43ef-7def-4053-ada9-a9af5ece146b" containerName="registry-server" containerID="cri-o://647b4f574e51970186b3e9b4cae666cacfa3aeb6a05999d27c94799938e1a869" gracePeriod=2 Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.251821 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.366994 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d8f43ef-7def-4053-ada9-a9af5ece146b-catalog-content\") pod \"8d8f43ef-7def-4053-ada9-a9af5ece146b\" (UID: \"8d8f43ef-7def-4053-ada9-a9af5ece146b\") " Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.367184 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2jbb\" (UniqueName: \"kubernetes.io/projected/8d8f43ef-7def-4053-ada9-a9af5ece146b-kube-api-access-p2jbb\") pod \"8d8f43ef-7def-4053-ada9-a9af5ece146b\" (UID: \"8d8f43ef-7def-4053-ada9-a9af5ece146b\") " Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.367276 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d8f43ef-7def-4053-ada9-a9af5ece146b-utilities\") pod \"8d8f43ef-7def-4053-ada9-a9af5ece146b\" (UID: \"8d8f43ef-7def-4053-ada9-a9af5ece146b\") " Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.369082 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d8f43ef-7def-4053-ada9-a9af5ece146b-utilities" (OuterVolumeSpecName: "utilities") pod "8d8f43ef-7def-4053-ada9-a9af5ece146b" (UID: "8d8f43ef-7def-4053-ada9-a9af5ece146b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.379631 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d8f43ef-7def-4053-ada9-a9af5ece146b-kube-api-access-p2jbb" (OuterVolumeSpecName: "kube-api-access-p2jbb") pod "8d8f43ef-7def-4053-ada9-a9af5ece146b" (UID: "8d8f43ef-7def-4053-ada9-a9af5ece146b"). InnerVolumeSpecName "kube-api-access-p2jbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.472404 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d8f43ef-7def-4053-ada9-a9af5ece146b-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.472443 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2jbb\" (UniqueName: \"kubernetes.io/projected/8d8f43ef-7def-4053-ada9-a9af5ece146b-kube-api-access-p2jbb\") on node \"crc\" DevicePath \"\"" Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.502952 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d8f43ef-7def-4053-ada9-a9af5ece146b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d8f43ef-7def-4053-ada9-a9af5ece146b" (UID: "8d8f43ef-7def-4053-ada9-a9af5ece146b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.575259 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d8f43ef-7def-4053-ada9-a9af5ece146b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.709109 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rkxdz" Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.709094 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkxdz" event={"ID":"8d8f43ef-7def-4053-ada9-a9af5ece146b","Type":"ContainerDied","Data":"647b4f574e51970186b3e9b4cae666cacfa3aeb6a05999d27c94799938e1a869"} Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.709146 4625 generic.go:334] "Generic (PLEG): container finished" podID="8d8f43ef-7def-4053-ada9-a9af5ece146b" containerID="647b4f574e51970186b3e9b4cae666cacfa3aeb6a05999d27c94799938e1a869" exitCode=0 Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.710303 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rkxdz" event={"ID":"8d8f43ef-7def-4053-ada9-a9af5ece146b","Type":"ContainerDied","Data":"45ff3f8e31d4f8ef83285dc66e6bcf7f1530d2957f567a295c8a58fefdcb1f80"} Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.710263 4625 scope.go:117] "RemoveContainer" containerID="647b4f574e51970186b3e9b4cae666cacfa3aeb6a05999d27c94799938e1a869" Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.765576 4625 scope.go:117] "RemoveContainer" containerID="cf942acc22ecbabe05da71e9b8f61f8694852754ebdff439656b1abab00f2a6d" Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.805010 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rkxdz"] Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.826301 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rkxdz"] Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.829718 4625 scope.go:117] "RemoveContainer" containerID="dac2d90e7244a9e6137e1fc4dfc01376f132663d2e47fb42f7cf758cf8224fb1" Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.862523 4625 scope.go:117] "RemoveContainer" containerID="647b4f574e51970186b3e9b4cae666cacfa3aeb6a05999d27c94799938e1a869" Dec 02 15:00:33 crc kubenswrapper[4625]: E1202 15:00:33.871599 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647b4f574e51970186b3e9b4cae666cacfa3aeb6a05999d27c94799938e1a869\": container with ID starting with 647b4f574e51970186b3e9b4cae666cacfa3aeb6a05999d27c94799938e1a869 not found: ID does not exist" containerID="647b4f574e51970186b3e9b4cae666cacfa3aeb6a05999d27c94799938e1a869" Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.871668 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647b4f574e51970186b3e9b4cae666cacfa3aeb6a05999d27c94799938e1a869"} err="failed to get container status \"647b4f574e51970186b3e9b4cae666cacfa3aeb6a05999d27c94799938e1a869\": rpc error: code = NotFound desc = could not find container \"647b4f574e51970186b3e9b4cae666cacfa3aeb6a05999d27c94799938e1a869\": container with ID starting with 647b4f574e51970186b3e9b4cae666cacfa3aeb6a05999d27c94799938e1a869 not found: ID does not exist" Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.871713 4625 scope.go:117] "RemoveContainer" containerID="cf942acc22ecbabe05da71e9b8f61f8694852754ebdff439656b1abab00f2a6d" Dec 02 15:00:33 crc kubenswrapper[4625]: E1202 15:00:33.875445 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf942acc22ecbabe05da71e9b8f61f8694852754ebdff439656b1abab00f2a6d\": container with ID starting with 
cf942acc22ecbabe05da71e9b8f61f8694852754ebdff439656b1abab00f2a6d not found: ID does not exist" containerID="cf942acc22ecbabe05da71e9b8f61f8694852754ebdff439656b1abab00f2a6d" Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.875493 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf942acc22ecbabe05da71e9b8f61f8694852754ebdff439656b1abab00f2a6d"} err="failed to get container status \"cf942acc22ecbabe05da71e9b8f61f8694852754ebdff439656b1abab00f2a6d\": rpc error: code = NotFound desc = could not find container \"cf942acc22ecbabe05da71e9b8f61f8694852754ebdff439656b1abab00f2a6d\": container with ID starting with cf942acc22ecbabe05da71e9b8f61f8694852754ebdff439656b1abab00f2a6d not found: ID does not exist" Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.875523 4625 scope.go:117] "RemoveContainer" containerID="dac2d90e7244a9e6137e1fc4dfc01376f132663d2e47fb42f7cf758cf8224fb1" Dec 02 15:00:33 crc kubenswrapper[4625]: E1202 15:00:33.876017 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dac2d90e7244a9e6137e1fc4dfc01376f132663d2e47fb42f7cf758cf8224fb1\": container with ID starting with dac2d90e7244a9e6137e1fc4dfc01376f132663d2e47fb42f7cf758cf8224fb1 not found: ID does not exist" containerID="dac2d90e7244a9e6137e1fc4dfc01376f132663d2e47fb42f7cf758cf8224fb1" Dec 02 15:00:33 crc kubenswrapper[4625]: I1202 15:00:33.876089 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dac2d90e7244a9e6137e1fc4dfc01376f132663d2e47fb42f7cf758cf8224fb1"} err="failed to get container status \"dac2d90e7244a9e6137e1fc4dfc01376f132663d2e47fb42f7cf758cf8224fb1\": rpc error: code = NotFound desc = could not find container \"dac2d90e7244a9e6137e1fc4dfc01376f132663d2e47fb42f7cf758cf8224fb1\": container with ID starting with dac2d90e7244a9e6137e1fc4dfc01376f132663d2e47fb42f7cf758cf8224fb1 not found: ID does not exist" Dec 02 15:00:34 crc kubenswrapper[4625]: I1202 15:00:34.868518 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d8f43ef-7def-4053-ada9-a9af5ece146b" path="/var/lib/kubelet/pods/8d8f43ef-7def-4053-ada9-a9af5ece146b/volumes" Dec 02 15:00:38 crc kubenswrapper[4625]: I1202 15:00:38.408932 4625 scope.go:117] "RemoveContainer" containerID="b88c43fe6bb15f55a2c78c390bf6167e9be7c44f0a493bae006b55eb7427ba96" Dec 02 15:00:41 crc kubenswrapper[4625]: I1202 15:00:41.857432 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:00:41 crc kubenswrapper[4625]: E1202 15:00:41.858649 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:00:56 crc kubenswrapper[4625]: I1202 15:00:56.856806 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:00:56 crc kubenswrapper[4625]: E1202 15:00:56.858780 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.160386 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29411461-8xbzd"] Dec 02 15:01:00 crc kubenswrapper[4625]: E1202 15:01:00.161111 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b6534e-e7a6-4d96-90a5-4d9fd6d89355" containerName="extract-content" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.161128 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b6534e-e7a6-4d96-90a5-4d9fd6d89355" containerName="extract-content" Dec 02 15:01:00 crc kubenswrapper[4625]: E1202 15:01:00.161153 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d8f43ef-7def-4053-ada9-a9af5ece146b" containerName="registry-server" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.161162 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d8f43ef-7def-4053-ada9-a9af5ece146b" containerName="registry-server" Dec 02 15:01:00 crc kubenswrapper[4625]: E1202 15:01:00.161174 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d8f43ef-7def-4053-ada9-a9af5ece146b" containerName="extract-utilities" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.161180 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d8f43ef-7def-4053-ada9-a9af5ece146b" containerName="extract-utilities" Dec 02 15:01:00 crc kubenswrapper[4625]: E1202 15:01:00.161198 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b6534e-e7a6-4d96-90a5-4d9fd6d89355" containerName="registry-server" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.161204 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b6534e-e7a6-4d96-90a5-4d9fd6d89355" containerName="registry-server" Dec 02 15:01:00 crc kubenswrapper[4625]: E1202 15:01:00.161214 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d8f43ef-7def-4053-ada9-a9af5ece146b" containerName="extract-content" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.161219 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d8f43ef-7def-4053-ada9-a9af5ece146b" containerName="extract-content" Dec 02 15:01:00 crc kubenswrapper[4625]: E1202 15:01:00.161229 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b6534e-e7a6-4d96-90a5-4d9fd6d89355" containerName="extract-utilities" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.161236 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b6534e-e7a6-4d96-90a5-4d9fd6d89355" containerName="extract-utilities" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.161440 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b6534e-e7a6-4d96-90a5-4d9fd6d89355" containerName="registry-server" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.161453 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d8f43ef-7def-4053-ada9-a9af5ece146b" containerName="registry-server" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.162154 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29411461-8xbzd" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.186723 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29411461-8xbzd"] Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.192087 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-config-data\") pod \"keystone-cron-29411461-8xbzd\" (UID: \"7bb98122-d182-47de-a568-e8c5c90072fa\") " pod="openstack/keystone-cron-29411461-8xbzd" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.192143 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-combined-ca-bundle\") pod \"keystone-cron-29411461-8xbzd\" (UID: \"7bb98122-d182-47de-a568-e8c5c90072fa\") " pod="openstack/keystone-cron-29411461-8xbzd" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.192170 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-fernet-keys\") pod \"keystone-cron-29411461-8xbzd\" (UID: \"7bb98122-d182-47de-a568-e8c5c90072fa\") " pod="openstack/keystone-cron-29411461-8xbzd" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.192242 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9p44\" (UniqueName: \"kubernetes.io/projected/7bb98122-d182-47de-a568-e8c5c90072fa-kube-api-access-s9p44\") pod \"keystone-cron-29411461-8xbzd\" (UID: \"7bb98122-d182-47de-a568-e8c5c90072fa\") " pod="openstack/keystone-cron-29411461-8xbzd" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.294164 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-config-data\") pod \"keystone-cron-29411461-8xbzd\" (UID: \"7bb98122-d182-47de-a568-e8c5c90072fa\") " pod="openstack/keystone-cron-29411461-8xbzd" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.294229 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-combined-ca-bundle\") pod \"keystone-cron-29411461-8xbzd\" (UID: \"7bb98122-d182-47de-a568-e8c5c90072fa\") " pod="openstack/keystone-cron-29411461-8xbzd" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.294253 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-fernet-keys\") pod \"keystone-cron-29411461-8xbzd\" (UID: \"7bb98122-d182-47de-a568-e8c5c90072fa\") " pod="openstack/keystone-cron-29411461-8xbzd" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.294352 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9p44\" (UniqueName: \"kubernetes.io/projected/7bb98122-d182-47de-a568-e8c5c90072fa-kube-api-access-s9p44\") pod \"keystone-cron-29411461-8xbzd\" (UID: \"7bb98122-d182-47de-a568-e8c5c90072fa\") " pod="openstack/keystone-cron-29411461-8xbzd" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.310330 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-combined-ca-bundle\") pod \"keystone-cron-29411461-8xbzd\" (UID: \"7bb98122-d182-47de-a568-e8c5c90072fa\") " pod="openstack/keystone-cron-29411461-8xbzd" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.311146 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-fernet-keys\") pod \"keystone-cron-29411461-8xbzd\" (UID: \"7bb98122-d182-47de-a568-e8c5c90072fa\") " pod="openstack/keystone-cron-29411461-8xbzd" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.317889 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9p44\" (UniqueName: \"kubernetes.io/projected/7bb98122-d182-47de-a568-e8c5c90072fa-kube-api-access-s9p44\") pod \"keystone-cron-29411461-8xbzd\" (UID: \"7bb98122-d182-47de-a568-e8c5c90072fa\") " pod="openstack/keystone-cron-29411461-8xbzd" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.322370 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-config-data\") pod \"keystone-cron-29411461-8xbzd\" (UID: \"7bb98122-d182-47de-a568-e8c5c90072fa\") " pod="openstack/keystone-cron-29411461-8xbzd" Dec 02 15:01:00 crc kubenswrapper[4625]: I1202 15:01:00.486761 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29411461-8xbzd" Dec 02 15:01:01 crc kubenswrapper[4625]: I1202 15:01:01.019552 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29411461-8xbzd"] Dec 02 15:01:02 crc kubenswrapper[4625]: I1202 15:01:02.076981 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29411461-8xbzd" event={"ID":"7bb98122-d182-47de-a568-e8c5c90072fa","Type":"ContainerStarted","Data":"13c30fcffad5b3160fec6cdbb766e9aa77d671c0ed7f183b0eb713e3f637b94f"} Dec 02 15:01:02 crc kubenswrapper[4625]: I1202 15:01:02.084562 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29411461-8xbzd" event={"ID":"7bb98122-d182-47de-a568-e8c5c90072fa","Type":"ContainerStarted","Data":"c65484b381fb86a63a4cdd4c931005bcb3c60dc3776fe3fda5a9adce573f8ed9"} Dec 02 15:01:02 crc kubenswrapper[4625]: I1202 15:01:02.142654 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29411461-8xbzd" podStartSLOduration=2.142608033 podStartE2EDuration="2.142608033s" podCreationTimestamp="2025-12-02 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:01:02.105074055 +0000 UTC m=+4618.067251140" watchObservedRunningTime="2025-12-02 15:01:02.142608033 +0000 UTC m=+4618.104785108" Dec 02 15:01:06 crc kubenswrapper[4625]: I1202 15:01:06.117451 4625 generic.go:334] "Generic (PLEG): container finished" podID="7bb98122-d182-47de-a568-e8c5c90072fa" containerID="13c30fcffad5b3160fec6cdbb766e9aa77d671c0ed7f183b0eb713e3f637b94f" exitCode=0 Dec 02 15:01:06 crc kubenswrapper[4625]: I1202 15:01:06.117548 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29411461-8xbzd" event={"ID":"7bb98122-d182-47de-a568-e8c5c90072fa","Type":"ContainerDied","Data":"13c30fcffad5b3160fec6cdbb766e9aa77d671c0ed7f183b0eb713e3f637b94f"} Dec 02 15:01:08 crc kubenswrapper[4625]: 
I1202 15:01:07.585753 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29411461-8xbzd" Dec 02 15:01:08 crc kubenswrapper[4625]: I1202 15:01:07.704630 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-combined-ca-bundle\") pod \"7bb98122-d182-47de-a568-e8c5c90072fa\" (UID: \"7bb98122-d182-47de-a568-e8c5c90072fa\") " Dec 02 15:01:08 crc kubenswrapper[4625]: I1202 15:01:07.704709 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-fernet-keys\") pod \"7bb98122-d182-47de-a568-e8c5c90072fa\" (UID: \"7bb98122-d182-47de-a568-e8c5c90072fa\") " Dec 02 15:01:08 crc kubenswrapper[4625]: I1202 15:01:07.704924 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9p44\" (UniqueName: \"kubernetes.io/projected/7bb98122-d182-47de-a568-e8c5c90072fa-kube-api-access-s9p44\") pod \"7bb98122-d182-47de-a568-e8c5c90072fa\" (UID: \"7bb98122-d182-47de-a568-e8c5c90072fa\") " Dec 02 15:01:08 crc kubenswrapper[4625]: I1202 15:01:07.704967 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-config-data\") pod \"7bb98122-d182-47de-a568-e8c5c90072fa\" (UID: \"7bb98122-d182-47de-a568-e8c5c90072fa\") " Dec 02 15:01:08 crc kubenswrapper[4625]: I1202 15:01:07.717269 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb98122-d182-47de-a568-e8c5c90072fa-kube-api-access-s9p44" (OuterVolumeSpecName: "kube-api-access-s9p44") pod "7bb98122-d182-47de-a568-e8c5c90072fa" (UID: "7bb98122-d182-47de-a568-e8c5c90072fa"). InnerVolumeSpecName "kube-api-access-s9p44". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:01:08 crc kubenswrapper[4625]: I1202 15:01:07.717869 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7bb98122-d182-47de-a568-e8c5c90072fa" (UID: "7bb98122-d182-47de-a568-e8c5c90072fa"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:01:08 crc kubenswrapper[4625]: I1202 15:01:07.749028 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bb98122-d182-47de-a568-e8c5c90072fa" (UID: "7bb98122-d182-47de-a568-e8c5c90072fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:01:08 crc kubenswrapper[4625]: I1202 15:01:07.770644 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-config-data" (OuterVolumeSpecName: "config-data") pod "7bb98122-d182-47de-a568-e8c5c90072fa" (UID: "7bb98122-d182-47de-a568-e8c5c90072fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:01:08 crc kubenswrapper[4625]: I1202 15:01:07.807810 4625 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:08 crc kubenswrapper[4625]: I1202 15:01:07.808228 4625 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:08 crc kubenswrapper[4625]: I1202 15:01:07.808242 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9p44\" (UniqueName: \"kubernetes.io/projected/7bb98122-d182-47de-a568-e8c5c90072fa-kube-api-access-s9p44\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:08 crc kubenswrapper[4625]: I1202 15:01:07.808255 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb98122-d182-47de-a568-e8c5c90072fa-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:01:08 crc kubenswrapper[4625]: I1202 15:01:08.144112 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29411461-8xbzd" event={"ID":"7bb98122-d182-47de-a568-e8c5c90072fa","Type":"ContainerDied","Data":"c65484b381fb86a63a4cdd4c931005bcb3c60dc3776fe3fda5a9adce573f8ed9"} Dec 02 15:01:08 crc kubenswrapper[4625]: I1202 15:01:08.144164 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c65484b381fb86a63a4cdd4c931005bcb3c60dc3776fe3fda5a9adce573f8ed9" Dec 02 15:01:08 crc kubenswrapper[4625]: I1202 15:01:08.144248 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29411461-8xbzd" Dec 02 15:01:11 crc kubenswrapper[4625]: I1202 15:01:11.857222 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:01:11 crc kubenswrapper[4625]: E1202 15:01:11.857927 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:01:23 crc kubenswrapper[4625]: I1202 15:01:23.855998 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:01:23 crc kubenswrapper[4625]: E1202 15:01:23.856862 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:01:35 crc kubenswrapper[4625]: I1202 15:01:35.857927 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:01:35 crc kubenswrapper[4625]: E1202 15:01:35.859249 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:01:49 crc kubenswrapper[4625]: I1202 15:01:49.857182 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:01:49 crc kubenswrapper[4625]: E1202 15:01:49.858105 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:02:00 crc kubenswrapper[4625]: I1202 15:02:00.857733 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:02:00 crc kubenswrapper[4625]: E1202 15:02:00.858561 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:02:14 crc kubenswrapper[4625]: I1202 15:02:14.868787 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:02:14 crc kubenswrapper[4625]: E1202 15:02:14.870573 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:02:28 crc kubenswrapper[4625]: I1202 15:02:28.856614 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:02:28 crc kubenswrapper[4625]: E1202 15:02:28.858787 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:02:43 crc kubenswrapper[4625]: I1202 15:02:43.857205 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:02:43 crc kubenswrapper[4625]: E1202 15:02:43.858702 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:02:56 crc kubenswrapper[4625]: I1202 15:02:56.856897 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:02:56 crc kubenswrapper[4625]: E1202 15:02:56.857930 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:03:10 crc kubenswrapper[4625]: I1202 15:03:10.857105 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:03:10 crc kubenswrapper[4625]: E1202 15:03:10.858772 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:03:23 crc kubenswrapper[4625]: I1202 15:03:23.856645 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:03:23 crc kubenswrapper[4625]: E1202 15:03:23.857907 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:03:34 crc kubenswrapper[4625]: I1202 15:03:34.903650 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:03:34 crc kubenswrapper[4625]: E1202 15:03:34.907241 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:03:48 crc kubenswrapper[4625]: I1202 15:03:48.869098 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:03:48 crc kubenswrapper[4625]: E1202 15:03:48.870591 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" 
podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:04:01 crc kubenswrapper[4625]: I1202 15:04:01.856930 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:04:01 crc kubenswrapper[4625]: E1202 15:04:01.858005 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:04:16 crc kubenswrapper[4625]: I1202 15:04:16.856412 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:04:16 crc kubenswrapper[4625]: E1202 15:04:16.857450 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:04:28 crc kubenswrapper[4625]: I1202 15:04:28.856298 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:04:29 crc kubenswrapper[4625]: I1202 15:04:29.660863 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"1351779b7646efe271250a1266eca5822ee3d9c4190a100848bb20492041ab1d"} Dec 02 15:04:43 crc kubenswrapper[4625]: I1202 15:04:43.624240 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rr8np"] Dec 02 15:04:43 crc kubenswrapper[4625]: E1202 15:04:43.627129 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bb98122-d182-47de-a568-e8c5c90072fa" containerName="keystone-cron" Dec 02 15:04:43 crc kubenswrapper[4625]: I1202 15:04:43.627175 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bb98122-d182-47de-a568-e8c5c90072fa" containerName="keystone-cron" Dec 02 15:04:43 crc kubenswrapper[4625]: I1202 15:04:43.627512 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bb98122-d182-47de-a568-e8c5c90072fa" containerName="keystone-cron" Dec 02 15:04:43 crc kubenswrapper[4625]: I1202 15:04:43.630689 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:43 crc kubenswrapper[4625]: I1202 15:04:43.649015 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rr8np"] Dec 02 15:04:43 crc kubenswrapper[4625]: I1202 15:04:43.682290 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1514641-9a87-4ab2-803f-10a503aaf48c-catalog-content\") pod \"community-operators-rr8np\" (UID: \"d1514641-9a87-4ab2-803f-10a503aaf48c\") " pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:43 crc kubenswrapper[4625]: I1202 15:04:43.682709 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pp26\" (UniqueName: \"kubernetes.io/projected/d1514641-9a87-4ab2-803f-10a503aaf48c-kube-api-access-2pp26\") pod \"community-operators-rr8np\" (UID: \"d1514641-9a87-4ab2-803f-10a503aaf48c\") " pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:43 crc kubenswrapper[4625]: I1202 15:04:43.682863 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1514641-9a87-4ab2-803f-10a503aaf48c-utilities\") pod \"community-operators-rr8np\" (UID: \"d1514641-9a87-4ab2-803f-10a503aaf48c\") " pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:43 crc kubenswrapper[4625]: I1202 15:04:43.785473 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pp26\" (UniqueName: \"kubernetes.io/projected/d1514641-9a87-4ab2-803f-10a503aaf48c-kube-api-access-2pp26\") pod \"community-operators-rr8np\" (UID: \"d1514641-9a87-4ab2-803f-10a503aaf48c\") " pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:43 crc kubenswrapper[4625]: I1202 15:04:43.785545 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1514641-9a87-4ab2-803f-10a503aaf48c-utilities\") pod \"community-operators-rr8np\" (UID: \"d1514641-9a87-4ab2-803f-10a503aaf48c\") " pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:43 crc kubenswrapper[4625]: I1202 15:04:43.785712 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1514641-9a87-4ab2-803f-10a503aaf48c-catalog-content\") pod \"community-operators-rr8np\" (UID: \"d1514641-9a87-4ab2-803f-10a503aaf48c\") " pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:43 crc kubenswrapper[4625]: I1202 15:04:43.787024 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1514641-9a87-4ab2-803f-10a503aaf48c-catalog-content\") pod \"community-operators-rr8np\" (UID: \"d1514641-9a87-4ab2-803f-10a503aaf48c\") " pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:43 crc kubenswrapper[4625]: I1202 15:04:43.787040 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1514641-9a87-4ab2-803f-10a503aaf48c-utilities\") pod \"community-operators-rr8np\" (UID: \"d1514641-9a87-4ab2-803f-10a503aaf48c\") " pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:43 crc kubenswrapper[4625]: I1202 15:04:43.806709 4625 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2pp26\" (UniqueName: \"kubernetes.io/projected/d1514641-9a87-4ab2-803f-10a503aaf48c-kube-api-access-2pp26\") pod \"community-operators-rr8np\" (UID: \"d1514641-9a87-4ab2-803f-10a503aaf48c\") " pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:43 crc kubenswrapper[4625]: I1202 15:04:43.992881 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:44 crc kubenswrapper[4625]: I1202 15:04:44.555190 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rr8np"] Dec 02 15:04:44 crc kubenswrapper[4625]: I1202 15:04:44.895166 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr8np" event={"ID":"d1514641-9a87-4ab2-803f-10a503aaf48c","Type":"ContainerStarted","Data":"af10dfa0ceaaf2777824e6c747bdccfe393c1c538bcec0b46f6c84127509ebc3"} Dec 02 15:04:45 crc kubenswrapper[4625]: I1202 15:04:45.887994 4625 generic.go:334] "Generic (PLEG): container finished" podID="d1514641-9a87-4ab2-803f-10a503aaf48c" containerID="7eac1250302a89de5bb705313c9440596bb20987320ab4df41dfc7d9df7dd27a" exitCode=0 Dec 02 15:04:45 crc kubenswrapper[4625]: I1202 15:04:45.888265 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr8np" event={"ID":"d1514641-9a87-4ab2-803f-10a503aaf48c","Type":"ContainerDied","Data":"7eac1250302a89de5bb705313c9440596bb20987320ab4df41dfc7d9df7dd27a"} Dec 02 15:04:47 crc kubenswrapper[4625]: I1202 15:04:47.918442 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr8np" event={"ID":"d1514641-9a87-4ab2-803f-10a503aaf48c","Type":"ContainerStarted","Data":"32d56aec1dce6baf6e806f6a159c1fd3165445f30a5e9d4980c9074862b376af"} Dec 02 15:04:48 crc kubenswrapper[4625]: I1202 15:04:48.933810 4625 generic.go:334] "Generic (PLEG): container finished" podID="d1514641-9a87-4ab2-803f-10a503aaf48c" containerID="32d56aec1dce6baf6e806f6a159c1fd3165445f30a5e9d4980c9074862b376af" exitCode=0 Dec 02 15:04:48 crc kubenswrapper[4625]: I1202 15:04:48.934035 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr8np" event={"ID":"d1514641-9a87-4ab2-803f-10a503aaf48c","Type":"ContainerDied","Data":"32d56aec1dce6baf6e806f6a159c1fd3165445f30a5e9d4980c9074862b376af"} Dec 02 15:04:49 crc kubenswrapper[4625]: I1202 15:04:49.944706 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr8np" event={"ID":"d1514641-9a87-4ab2-803f-10a503aaf48c","Type":"ContainerStarted","Data":"a47e0bf80a41ed8a4e6c5f5f2895c0ba12dd0dfe9b115fbe5eb0be74acbd7fc8"} Dec 02 15:04:49 crc kubenswrapper[4625]: I1202 15:04:49.971919 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rr8np" podStartSLOduration=3.268163436 podStartE2EDuration="6.971892904s" podCreationTimestamp="2025-12-02 15:04:43 +0000 UTC" firstStartedPulling="2025-12-02 15:04:45.890440154 +0000 UTC m=+4841.852617259" lastFinishedPulling="2025-12-02 15:04:49.594169642 +0000 UTC m=+4845.556346727" observedRunningTime="2025-12-02 15:04:49.966096686 +0000 UTC m=+4845.928273761" watchObservedRunningTime="2025-12-02 15:04:49.971892904 +0000 UTC m=+4845.934069999" Dec 02 15:04:53 crc kubenswrapper[4625]: I1202 15:04:53.993087 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:53 crc kubenswrapper[4625]: I1202 15:04:53.993727 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:54 crc kubenswrapper[4625]: I1202 15:04:54.074141 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:55 crc kubenswrapper[4625]: I1202 15:04:55.086596 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:55 crc kubenswrapper[4625]: I1202 15:04:55.172711 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rr8np"] Dec 02 15:04:57 crc kubenswrapper[4625]: I1202 15:04:57.024364 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rr8np" podUID="d1514641-9a87-4ab2-803f-10a503aaf48c" containerName="registry-server" containerID="cri-o://a47e0bf80a41ed8a4e6c5f5f2895c0ba12dd0dfe9b115fbe5eb0be74acbd7fc8" gracePeriod=2 Dec 02 15:04:57 crc kubenswrapper[4625]: I1202 15:04:57.615219 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:57 crc kubenswrapper[4625]: I1202 15:04:57.698352 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1514641-9a87-4ab2-803f-10a503aaf48c-catalog-content\") pod \"d1514641-9a87-4ab2-803f-10a503aaf48c\" (UID: \"d1514641-9a87-4ab2-803f-10a503aaf48c\") " Dec 02 15:04:57 crc kubenswrapper[4625]: I1202 15:04:57.698841 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pp26\" (UniqueName: \"kubernetes.io/projected/d1514641-9a87-4ab2-803f-10a503aaf48c-kube-api-access-2pp26\") pod \"d1514641-9a87-4ab2-803f-10a503aaf48c\" (UID: \"d1514641-9a87-4ab2-803f-10a503aaf48c\") " Dec 02 15:04:57 crc kubenswrapper[4625]: I1202 15:04:57.699048 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1514641-9a87-4ab2-803f-10a503aaf48c-utilities\") pod \"d1514641-9a87-4ab2-803f-10a503aaf48c\" (UID: \"d1514641-9a87-4ab2-803f-10a503aaf48c\") " Dec 02 15:04:57 crc kubenswrapper[4625]: I1202 15:04:57.700568 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1514641-9a87-4ab2-803f-10a503aaf48c-utilities" (OuterVolumeSpecName: "utilities") pod "d1514641-9a87-4ab2-803f-10a503aaf48c" (UID: "d1514641-9a87-4ab2-803f-10a503aaf48c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:04:57 crc kubenswrapper[4625]: I1202 15:04:57.733813 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1514641-9a87-4ab2-803f-10a503aaf48c-kube-api-access-2pp26" (OuterVolumeSpecName: "kube-api-access-2pp26") pod "d1514641-9a87-4ab2-803f-10a503aaf48c" (UID: "d1514641-9a87-4ab2-803f-10a503aaf48c"). InnerVolumeSpecName "kube-api-access-2pp26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:04:57 crc kubenswrapper[4625]: I1202 15:04:57.801721 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pp26\" (UniqueName: \"kubernetes.io/projected/d1514641-9a87-4ab2-803f-10a503aaf48c-kube-api-access-2pp26\") on node \"crc\" DevicePath \"\"" Dec 02 15:04:57 crc kubenswrapper[4625]: I1202 15:04:57.801746 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1514641-9a87-4ab2-803f-10a503aaf48c-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:04:57 crc kubenswrapper[4625]: I1202 15:04:57.838034 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1514641-9a87-4ab2-803f-10a503aaf48c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1514641-9a87-4ab2-803f-10a503aaf48c" (UID: "d1514641-9a87-4ab2-803f-10a503aaf48c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:04:57 crc kubenswrapper[4625]: I1202 15:04:57.905301 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1514641-9a87-4ab2-803f-10a503aaf48c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:04:58 crc kubenswrapper[4625]: I1202 15:04:58.038387 4625 generic.go:334] "Generic (PLEG): container finished" podID="d1514641-9a87-4ab2-803f-10a503aaf48c" containerID="a47e0bf80a41ed8a4e6c5f5f2895c0ba12dd0dfe9b115fbe5eb0be74acbd7fc8" exitCode=0 Dec 02 15:04:58 crc kubenswrapper[4625]: I1202 15:04:58.038442 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr8np" event={"ID":"d1514641-9a87-4ab2-803f-10a503aaf48c","Type":"ContainerDied","Data":"a47e0bf80a41ed8a4e6c5f5f2895c0ba12dd0dfe9b115fbe5eb0be74acbd7fc8"} Dec 02 15:04:58 crc kubenswrapper[4625]: I1202 15:04:58.038501 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr8np" event={"ID":"d1514641-9a87-4ab2-803f-10a503aaf48c","Type":"ContainerDied","Data":"af10dfa0ceaaf2777824e6c747bdccfe393c1c538bcec0b46f6c84127509ebc3"} Dec 02 15:04:58 crc kubenswrapper[4625]: I1202 15:04:58.038533 4625 scope.go:117] "RemoveContainer" containerID="a47e0bf80a41ed8a4e6c5f5f2895c0ba12dd0dfe9b115fbe5eb0be74acbd7fc8" Dec 02 15:04:58 crc kubenswrapper[4625]: I1202 15:04:58.038563 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rr8np" Dec 02 15:04:58 crc kubenswrapper[4625]: I1202 15:04:58.094521 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rr8np"] Dec 02 15:04:58 crc kubenswrapper[4625]: I1202 15:04:58.097872 4625 scope.go:117] "RemoveContainer" containerID="32d56aec1dce6baf6e806f6a159c1fd3165445f30a5e9d4980c9074862b376af" Dec 02 15:04:58 crc kubenswrapper[4625]: I1202 15:04:58.104260 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rr8np"] Dec 02 15:04:58 crc kubenswrapper[4625]: I1202 15:04:58.137044 4625 scope.go:117] "RemoveContainer" containerID="7eac1250302a89de5bb705313c9440596bb20987320ab4df41dfc7d9df7dd27a" Dec 02 15:04:58 crc kubenswrapper[4625]: I1202 15:04:58.177652 4625 scope.go:117] "RemoveContainer" containerID="a47e0bf80a41ed8a4e6c5f5f2895c0ba12dd0dfe9b115fbe5eb0be74acbd7fc8" Dec 02 15:04:58 crc kubenswrapper[4625]: E1202 15:04:58.178167 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a47e0bf80a41ed8a4e6c5f5f2895c0ba12dd0dfe9b115fbe5eb0be74acbd7fc8\": container with ID starting with a47e0bf80a41ed8a4e6c5f5f2895c0ba12dd0dfe9b115fbe5eb0be74acbd7fc8 not found: ID does not exist" containerID="a47e0bf80a41ed8a4e6c5f5f2895c0ba12dd0dfe9b115fbe5eb0be74acbd7fc8" Dec 02 15:04:58 crc kubenswrapper[4625]: I1202 15:04:58.178241 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a47e0bf80a41ed8a4e6c5f5f2895c0ba12dd0dfe9b115fbe5eb0be74acbd7fc8"} err="failed to get container status \"a47e0bf80a41ed8a4e6c5f5f2895c0ba12dd0dfe9b115fbe5eb0be74acbd7fc8\": rpc error: code = NotFound desc = could not find container \"a47e0bf80a41ed8a4e6c5f5f2895c0ba12dd0dfe9b115fbe5eb0be74acbd7fc8\": container with ID starting with a47e0bf80a41ed8a4e6c5f5f2895c0ba12dd0dfe9b115fbe5eb0be74acbd7fc8 not found: ID does not exist" Dec 02 15:04:58 crc kubenswrapper[4625]: I1202 15:04:58.178282 4625 scope.go:117] "RemoveContainer" containerID="32d56aec1dce6baf6e806f6a159c1fd3165445f30a5e9d4980c9074862b376af" Dec 02 15:04:58 crc kubenswrapper[4625]: E1202 15:04:58.178712 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32d56aec1dce6baf6e806f6a159c1fd3165445f30a5e9d4980c9074862b376af\": container with ID starting with 32d56aec1dce6baf6e806f6a159c1fd3165445f30a5e9d4980c9074862b376af not found: ID does not exist" containerID="32d56aec1dce6baf6e806f6a159c1fd3165445f30a5e9d4980c9074862b376af" Dec 02 15:04:58 crc kubenswrapper[4625]: I1202 15:04:58.178751 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32d56aec1dce6baf6e806f6a159c1fd3165445f30a5e9d4980c9074862b376af"} err="failed to get container status \"32d56aec1dce6baf6e806f6a159c1fd3165445f30a5e9d4980c9074862b376af\": rpc error: code = NotFound desc = could not find container \"32d56aec1dce6baf6e806f6a159c1fd3165445f30a5e9d4980c9074862b376af\": container with ID starting with 32d56aec1dce6baf6e806f6a159c1fd3165445f30a5e9d4980c9074862b376af not found: ID does not exist" Dec 02 15:04:58 crc kubenswrapper[4625]: I1202 15:04:58.178780 4625 scope.go:117] "RemoveContainer" containerID="7eac1250302a89de5bb705313c9440596bb20987320ab4df41dfc7d9df7dd27a" Dec 02 15:04:58 crc kubenswrapper[4625]: E1202 15:04:58.179030 4625 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7eac1250302a89de5bb705313c9440596bb20987320ab4df41dfc7d9df7dd27a\": container with ID starting with 7eac1250302a89de5bb705313c9440596bb20987320ab4df41dfc7d9df7dd27a not found: ID does not exist" containerID="7eac1250302a89de5bb705313c9440596bb20987320ab4df41dfc7d9df7dd27a" Dec 02 15:04:58 crc kubenswrapper[4625]: I1202 15:04:58.179060 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eac1250302a89de5bb705313c9440596bb20987320ab4df41dfc7d9df7dd27a"} err="failed to get container status \"7eac1250302a89de5bb705313c9440596bb20987320ab4df41dfc7d9df7dd27a\": rpc error: code = NotFound desc = could not find container \"7eac1250302a89de5bb705313c9440596bb20987320ab4df41dfc7d9df7dd27a\": container with ID starting with 7eac1250302a89de5bb705313c9440596bb20987320ab4df41dfc7d9df7dd27a not found: ID does not exist" Dec 02 15:04:58 crc kubenswrapper[4625]: I1202 15:04:58.872031 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1514641-9a87-4ab2-803f-10a503aaf48c" path="/var/lib/kubelet/pods/d1514641-9a87-4ab2-803f-10a503aaf48c/volumes" Dec 02 15:05:06 crc kubenswrapper[4625]: I1202 15:05:06.131207 4625 generic.go:334] "Generic (PLEG): container finished" podID="f72b183c-9a68-408e-b6b0-2accb1e96305" containerID="60529a8d3127c5f558cfac5d5bbf30b26305d57752fe5bfbc2714b6824a34fd8" exitCode=0 Dec 02 15:05:06 crc kubenswrapper[4625]: I1202 15:05:06.133569 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f72b183c-9a68-408e-b6b0-2accb1e96305","Type":"ContainerDied","Data":"60529a8d3127c5f558cfac5d5bbf30b26305d57752fe5bfbc2714b6824a34fd8"} Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.630742 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.794884 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-ssh-key\") pod \"f72b183c-9a68-408e-b6b0-2accb1e96305\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.795240 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jzpw\" (UniqueName: \"kubernetes.io/projected/f72b183c-9a68-408e-b6b0-2accb1e96305-kube-api-access-8jzpw\") pod \"f72b183c-9a68-408e-b6b0-2accb1e96305\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.795329 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-openstack-config-secret\") pod \"f72b183c-9a68-408e-b6b0-2accb1e96305\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.795408 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"f72b183c-9a68-408e-b6b0-2accb1e96305\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.795469 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f72b183c-9a68-408e-b6b0-2accb1e96305-test-operator-ephemeral-workdir\") pod \"f72b183c-9a68-408e-b6b0-2accb1e96305\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.795577 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f72b183c-9a68-408e-b6b0-2accb1e96305-test-operator-ephemeral-temporary\") pod \"f72b183c-9a68-408e-b6b0-2accb1e96305\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.795619 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f72b183c-9a68-408e-b6b0-2accb1e96305-openstack-config\") pod \"f72b183c-9a68-408e-b6b0-2accb1e96305\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.795661 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f72b183c-9a68-408e-b6b0-2accb1e96305-config-data\") pod \"f72b183c-9a68-408e-b6b0-2accb1e96305\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.795796 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-ca-certs\") pod \"f72b183c-9a68-408e-b6b0-2accb1e96305\" (UID: \"f72b183c-9a68-408e-b6b0-2accb1e96305\") " Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.797946 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f72b183c-9a68-408e-b6b0-2accb1e96305-config-data" (OuterVolumeSpecName: "config-data") pod 
"f72b183c-9a68-408e-b6b0-2accb1e96305" (UID: "f72b183c-9a68-408e-b6b0-2accb1e96305"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.799078 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f72b183c-9a68-408e-b6b0-2accb1e96305-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "f72b183c-9a68-408e-b6b0-2accb1e96305" (UID: "f72b183c-9a68-408e-b6b0-2accb1e96305"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.803044 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f72b183c-9a68-408e-b6b0-2accb1e96305-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "f72b183c-9a68-408e-b6b0-2accb1e96305" (UID: "f72b183c-9a68-408e-b6b0-2accb1e96305"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.808376 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f72b183c-9a68-408e-b6b0-2accb1e96305-kube-api-access-8jzpw" (OuterVolumeSpecName: "kube-api-access-8jzpw") pod "f72b183c-9a68-408e-b6b0-2accb1e96305" (UID: "f72b183c-9a68-408e-b6b0-2accb1e96305"). InnerVolumeSpecName "kube-api-access-8jzpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.812607 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "f72b183c-9a68-408e-b6b0-2accb1e96305" (UID: "f72b183c-9a68-408e-b6b0-2accb1e96305"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.829461 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f72b183c-9a68-408e-b6b0-2accb1e96305" (UID: "f72b183c-9a68-408e-b6b0-2accb1e96305"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.844448 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "f72b183c-9a68-408e-b6b0-2accb1e96305" (UID: "f72b183c-9a68-408e-b6b0-2accb1e96305"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.881024 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "f72b183c-9a68-408e-b6b0-2accb1e96305" (UID: "f72b183c-9a68-408e-b6b0-2accb1e96305"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.898010 4625 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.898044 4625 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.898054 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jzpw\" (UniqueName: \"kubernetes.io/projected/f72b183c-9a68-408e-b6b0-2accb1e96305-kube-api-access-8jzpw\") on node \"crc\" DevicePath \"\"" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.898068 4625 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f72b183c-9a68-408e-b6b0-2accb1e96305-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.899726 4625 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.899755 4625 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f72b183c-9a68-408e-b6b0-2accb1e96305-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.899770 4625 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f72b183c-9a68-408e-b6b0-2accb1e96305-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.899782 4625 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f72b183c-9a68-408e-b6b0-2accb1e96305-config-data\") on node \"crc\" DevicePath \"\"" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.902132 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f72b183c-9a68-408e-b6b0-2accb1e96305-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "f72b183c-9a68-408e-b6b0-2accb1e96305" (UID: "f72b183c-9a68-408e-b6b0-2accb1e96305"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 02 15:05:07 crc kubenswrapper[4625]: I1202 15:05:07.921163 4625 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 02 15:05:08 crc kubenswrapper[4625]: I1202 15:05:08.002563 4625 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f72b183c-9a68-408e-b6b0-2accb1e96305-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 02 15:05:08 crc kubenswrapper[4625]: I1202 15:05:08.002603 4625 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 02 15:05:08 crc kubenswrapper[4625]: I1202 15:05:08.155164 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f72b183c-9a68-408e-b6b0-2accb1e96305","Type":"ContainerDied","Data":"5d4ebee3f70668e25a5fd582dbe24777a736c4a0892acfc7f34fa1d03306cfa3"} Dec 02 15:05:08 crc kubenswrapper[4625]: I1202 15:05:08.155236 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d4ebee3f70668e25a5fd582dbe24777a736c4a0892acfc7f34fa1d03306cfa3" Dec 02 15:05:08 crc kubenswrapper[4625]: I1202 15:05:08.155237 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 02 15:05:19 crc kubenswrapper[4625]: I1202 15:05:19.713812 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 02 15:05:19 crc kubenswrapper[4625]: E1202 15:05:19.714982 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1514641-9a87-4ab2-803f-10a503aaf48c" containerName="extract-utilities" Dec 02 15:05:19 crc kubenswrapper[4625]: I1202 15:05:19.714998 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1514641-9a87-4ab2-803f-10a503aaf48c" containerName="extract-utilities" Dec 02 15:05:19 crc kubenswrapper[4625]: E1202 15:05:19.715027 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1514641-9a87-4ab2-803f-10a503aaf48c" containerName="registry-server" Dec 02 15:05:19 crc kubenswrapper[4625]: I1202 15:05:19.715033 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1514641-9a87-4ab2-803f-10a503aaf48c" containerName="registry-server" Dec 02 15:05:19 crc kubenswrapper[4625]: E1202 15:05:19.715055 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1514641-9a87-4ab2-803f-10a503aaf48c" containerName="extract-content" Dec 02 15:05:19 crc kubenswrapper[4625]: I1202 15:05:19.715064 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1514641-9a87-4ab2-803f-10a503aaf48c" containerName="extract-content" Dec 02 15:05:19 crc kubenswrapper[4625]: E1202 15:05:19.715077 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f72b183c-9a68-408e-b6b0-2accb1e96305" containerName="tempest-tests-tempest-tests-runner" Dec 02 15:05:19 crc kubenswrapper[4625]: I1202 15:05:19.715083 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="f72b183c-9a68-408e-b6b0-2accb1e96305" containerName="tempest-tests-tempest-tests-runner" Dec 02 15:05:19 crc kubenswrapper[4625]: I1202 15:05:19.715297 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="f72b183c-9a68-408e-b6b0-2accb1e96305" containerName="tempest-tests-tempest-tests-runner" Dec 02 
15:05:19 crc kubenswrapper[4625]: I1202 15:05:19.715350 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1514641-9a87-4ab2-803f-10a503aaf48c" containerName="registry-server" Dec 02 15:05:19 crc kubenswrapper[4625]: I1202 15:05:19.716160 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 15:05:19 crc kubenswrapper[4625]: I1202 15:05:19.719048 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-r2fq6" Dec 02 15:05:19 crc kubenswrapper[4625]: I1202 15:05:19.780079 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 02 15:05:19 crc kubenswrapper[4625]: I1202 15:05:19.874735 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4gsx\" (UniqueName: \"kubernetes.io/projected/3042b4b8-7047-404f-b6b2-70c510508bc6-kube-api-access-f4gsx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3042b4b8-7047-404f-b6b2-70c510508bc6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 15:05:19 crc kubenswrapper[4625]: I1202 15:05:19.875048 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3042b4b8-7047-404f-b6b2-70c510508bc6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 15:05:19 crc kubenswrapper[4625]: I1202 15:05:19.977228 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3042b4b8-7047-404f-b6b2-70c510508bc6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 15:05:19 crc kubenswrapper[4625]: I1202 15:05:19.978735 4625 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3042b4b8-7047-404f-b6b2-70c510508bc6\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 15:05:19 crc kubenswrapper[4625]: I1202 15:05:19.979605 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4gsx\" (UniqueName: \"kubernetes.io/projected/3042b4b8-7047-404f-b6b2-70c510508bc6-kube-api-access-f4gsx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3042b4b8-7047-404f-b6b2-70c510508bc6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 15:05:20 crc kubenswrapper[4625]: I1202 15:05:20.022975 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4gsx\" (UniqueName: \"kubernetes.io/projected/3042b4b8-7047-404f-b6b2-70c510508bc6-kube-api-access-f4gsx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3042b4b8-7047-404f-b6b2-70c510508bc6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 15:05:20 crc kubenswrapper[4625]: I1202 15:05:20.057047 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3042b4b8-7047-404f-b6b2-70c510508bc6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 15:05:20 crc kubenswrapper[4625]: I1202 15:05:20.351848 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 02 15:05:20 crc kubenswrapper[4625]: I1202 15:05:20.885780 4625 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 15:05:20 crc kubenswrapper[4625]: I1202 15:05:20.892398 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 02 15:05:21 crc kubenswrapper[4625]: I1202 15:05:21.299367 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3042b4b8-7047-404f-b6b2-70c510508bc6","Type":"ContainerStarted","Data":"dd0bceeff47a8d3ae05450e5d499c49ea3d37ee3ec83255e522d66daccf5dd7a"} Dec 02 15:05:22 crc kubenswrapper[4625]: I1202 15:05:22.311338 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3042b4b8-7047-404f-b6b2-70c510508bc6","Type":"ContainerStarted","Data":"f9a10e64d68227f5ece2b2875057c2e3e6bd4760b07b128c937fcd5e59af923f"} Dec 02 15:05:22 crc kubenswrapper[4625]: I1202 15:05:22.334953 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.168995714 podStartE2EDuration="3.33492814s" podCreationTimestamp="2025-12-02 15:05:19 +0000 UTC" firstStartedPulling="2025-12-02 15:05:20.885368725 +0000 UTC m=+4876.847545800" lastFinishedPulling="2025-12-02 15:05:22.051301151 +0000 UTC m=+4878.013478226" observedRunningTime="2025-12-02 15:05:22.326868481 +0000 UTC m=+4878.289045556" watchObservedRunningTime="2025-12-02 15:05:22.33492814 +0000 UTC m=+4878.297105215" Dec 02 15:05:45 crc kubenswrapper[4625]: I1202 15:05:45.268569 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bmmjp/must-gather-6hcpw"] Dec 02 15:05:45 crc kubenswrapper[4625]: I1202 15:05:45.271018 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmmjp/must-gather-6hcpw" Dec 02 15:05:45 crc kubenswrapper[4625]: I1202 15:05:45.275662 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bmmjp"/"default-dockercfg-f9szj" Dec 02 15:05:45 crc kubenswrapper[4625]: I1202 15:05:45.275767 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bmmjp"/"kube-root-ca.crt" Dec 02 15:05:45 crc kubenswrapper[4625]: I1202 15:05:45.277626 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bmmjp"/"openshift-service-ca.crt" Dec 02 15:05:45 crc kubenswrapper[4625]: I1202 15:05:45.302616 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bmmjp/must-gather-6hcpw"] Dec 02 15:05:45 crc kubenswrapper[4625]: I1202 15:05:45.339674 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8snfn\" (UniqueName: \"kubernetes.io/projected/6e7fcdb7-9584-46f2-9522-e075dc26b408-kube-api-access-8snfn\") pod \"must-gather-6hcpw\" (UID: \"6e7fcdb7-9584-46f2-9522-e075dc26b408\") " pod="openshift-must-gather-bmmjp/must-gather-6hcpw" Dec 02 15:05:45 crc kubenswrapper[4625]: I1202 15:05:45.339861 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e7fcdb7-9584-46f2-9522-e075dc26b408-must-gather-output\") pod \"must-gather-6hcpw\" (UID: \"6e7fcdb7-9584-46f2-9522-e075dc26b408\") " pod="openshift-must-gather-bmmjp/must-gather-6hcpw" Dec 02 15:05:45 crc kubenswrapper[4625]: I1202 15:05:45.442728 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e7fcdb7-9584-46f2-9522-e075dc26b408-must-gather-output\") pod \"must-gather-6hcpw\" (UID: \"6e7fcdb7-9584-46f2-9522-e075dc26b408\") " pod="openshift-must-gather-bmmjp/must-gather-6hcpw" Dec 02 15:05:45 crc kubenswrapper[4625]: I1202 15:05:45.442294 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e7fcdb7-9584-46f2-9522-e075dc26b408-must-gather-output\") pod \"must-gather-6hcpw\" (UID: \"6e7fcdb7-9584-46f2-9522-e075dc26b408\") " pod="openshift-must-gather-bmmjp/must-gather-6hcpw" Dec 02 15:05:45 crc kubenswrapper[4625]: I1202 15:05:45.442835 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8snfn\" (UniqueName: \"kubernetes.io/projected/6e7fcdb7-9584-46f2-9522-e075dc26b408-kube-api-access-8snfn\") pod \"must-gather-6hcpw\" (UID: \"6e7fcdb7-9584-46f2-9522-e075dc26b408\") " pod="openshift-must-gather-bmmjp/must-gather-6hcpw" Dec 02 15:05:45 crc kubenswrapper[4625]: I1202 15:05:45.461981 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8snfn\" (UniqueName: \"kubernetes.io/projected/6e7fcdb7-9584-46f2-9522-e075dc26b408-kube-api-access-8snfn\") pod \"must-gather-6hcpw\" (UID: \"6e7fcdb7-9584-46f2-9522-e075dc26b408\") " pod="openshift-must-gather-bmmjp/must-gather-6hcpw" Dec 02 15:05:45 crc kubenswrapper[4625]: I1202 15:05:45.600433 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmmjp/must-gather-6hcpw" Dec 02 15:05:46 crc kubenswrapper[4625]: I1202 15:05:46.119343 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bmmjp/must-gather-6hcpw"] Dec 02 15:05:46 crc kubenswrapper[4625]: I1202 15:05:46.695933 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmmjp/must-gather-6hcpw" event={"ID":"6e7fcdb7-9584-46f2-9522-e075dc26b408","Type":"ContainerStarted","Data":"cf3310658de697d9bd7944d26c7f9da3a40bedec2a32ad317f6407a704f91637"} Dec 02 15:05:52 crc kubenswrapper[4625]: I1202 15:05:52.827722 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmmjp/must-gather-6hcpw" event={"ID":"6e7fcdb7-9584-46f2-9522-e075dc26b408","Type":"ContainerStarted","Data":"bd535d254659630fc6bd7e4fdff4a62bbff7a9a9f165f1bf8b1a338a7135ea1b"} Dec 02 15:05:52 crc kubenswrapper[4625]: I1202 15:05:52.828262 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmmjp/must-gather-6hcpw" event={"ID":"6e7fcdb7-9584-46f2-9522-e075dc26b408","Type":"ContainerStarted","Data":"dc6b8c1a2edfc923a8b14693ad286662ce580969cdcc7aab11d690e7db13d37b"} Dec 02 15:05:52 crc kubenswrapper[4625]: I1202 15:05:52.857929 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bmmjp/must-gather-6hcpw" podStartSLOduration=1.93685719 podStartE2EDuration="7.85789267s" podCreationTimestamp="2025-12-02 15:05:45 +0000 UTC" firstStartedPulling="2025-12-02 15:05:46.119832374 +0000 UTC m=+4902.082009449" lastFinishedPulling="2025-12-02 15:05:52.040867854 +0000 UTC m=+4908.003044929" observedRunningTime="2025-12-02 15:05:52.853523841 +0000 UTC m=+4908.815700916" watchObservedRunningTime="2025-12-02 15:05:52.85789267 +0000 UTC m=+4908.820069785" Dec 02 15:05:57 crc kubenswrapper[4625]: I1202 15:05:57.515885 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bmmjp/crc-debug-h2ng2"] Dec 02 15:05:57 crc kubenswrapper[4625]: I1202 15:05:57.519131 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmmjp/crc-debug-h2ng2" Dec 02 15:05:57 crc kubenswrapper[4625]: I1202 15:05:57.625005 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f7g9\" (UniqueName: \"kubernetes.io/projected/7829057a-f351-4122-aa72-3deec6a283ca-kube-api-access-5f7g9\") pod \"crc-debug-h2ng2\" (UID: \"7829057a-f351-4122-aa72-3deec6a283ca\") " pod="openshift-must-gather-bmmjp/crc-debug-h2ng2" Dec 02 15:05:57 crc kubenswrapper[4625]: I1202 15:05:57.625369 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7829057a-f351-4122-aa72-3deec6a283ca-host\") pod \"crc-debug-h2ng2\" (UID: \"7829057a-f351-4122-aa72-3deec6a283ca\") " pod="openshift-must-gather-bmmjp/crc-debug-h2ng2" Dec 02 15:05:57 crc kubenswrapper[4625]: I1202 15:05:57.727825 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f7g9\" (UniqueName: \"kubernetes.io/projected/7829057a-f351-4122-aa72-3deec6a283ca-kube-api-access-5f7g9\") pod \"crc-debug-h2ng2\" (UID: \"7829057a-f351-4122-aa72-3deec6a283ca\") " pod="openshift-must-gather-bmmjp/crc-debug-h2ng2" Dec 02 15:05:57 crc kubenswrapper[4625]: I1202 15:05:57.728188 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7829057a-f351-4122-aa72-3deec6a283ca-host\") pod \"crc-debug-h2ng2\" (UID: \"7829057a-f351-4122-aa72-3deec6a283ca\") " pod="openshift-must-gather-bmmjp/crc-debug-h2ng2" Dec 02 15:05:57 crc kubenswrapper[4625]: I1202 15:05:57.728424 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7829057a-f351-4122-aa72-3deec6a283ca-host\") pod \"crc-debug-h2ng2\" (UID: \"7829057a-f351-4122-aa72-3deec6a283ca\") " pod="openshift-must-gather-bmmjp/crc-debug-h2ng2" Dec 02 15:05:57 crc kubenswrapper[4625]: I1202 15:05:57.755704 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f7g9\" (UniqueName: \"kubernetes.io/projected/7829057a-f351-4122-aa72-3deec6a283ca-kube-api-access-5f7g9\") pod \"crc-debug-h2ng2\" (UID: \"7829057a-f351-4122-aa72-3deec6a283ca\") " pod="openshift-must-gather-bmmjp/crc-debug-h2ng2" Dec 02 15:05:57 crc kubenswrapper[4625]: I1202 15:05:57.838982 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmmjp/crc-debug-h2ng2" Dec 02 15:05:58 crc kubenswrapper[4625]: I1202 15:05:58.903177 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmmjp/crc-debug-h2ng2" event={"ID":"7829057a-f351-4122-aa72-3deec6a283ca","Type":"ContainerStarted","Data":"7d79869fc33635dd59d0a737f016a3ff43ae1b59de89115919c3ae982764f42b"} Dec 02 15:06:13 crc kubenswrapper[4625]: I1202 15:06:13.078617 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmmjp/crc-debug-h2ng2" event={"ID":"7829057a-f351-4122-aa72-3deec6a283ca","Type":"ContainerStarted","Data":"f0f790cf84dfba65687f18734232eb072fb7daf8a072b6841223aacfee1d3567"} Dec 02 15:06:13 crc kubenswrapper[4625]: I1202 15:06:13.106191 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bmmjp/crc-debug-h2ng2" podStartSLOduration=1.321016911 podStartE2EDuration="16.106167305s" podCreationTimestamp="2025-12-02 15:05:57 +0000 UTC" firstStartedPulling="2025-12-02 15:05:57.919270137 +0000 UTC m=+4913.881447212" lastFinishedPulling="2025-12-02 15:06:12.704420521 +0000 UTC m=+4928.666597606" observedRunningTime="2025-12-02 15:06:13.100177473 +0000 UTC m=+4929.062354548" watchObservedRunningTime="2025-12-02 15:06:13.106167305 +0000 UTC m=+4929.068344380" Dec 02 15:06:49 crc kubenswrapper[4625]: I1202 15:06:49.271257 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:06:49 crc kubenswrapper[4625]: I1202 15:06:49.271834 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:07:11 crc kubenswrapper[4625]: I1202 15:07:11.009815 4625 generic.go:334] "Generic (PLEG): container finished" podID="7829057a-f351-4122-aa72-3deec6a283ca" containerID="f0f790cf84dfba65687f18734232eb072fb7daf8a072b6841223aacfee1d3567" exitCode=0 Dec 02 15:07:11 crc kubenswrapper[4625]: I1202 15:07:11.009880 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmmjp/crc-debug-h2ng2" event={"ID":"7829057a-f351-4122-aa72-3deec6a283ca","Type":"ContainerDied","Data":"f0f790cf84dfba65687f18734232eb072fb7daf8a072b6841223aacfee1d3567"} Dec 02 15:07:12 crc kubenswrapper[4625]: I1202 15:07:12.149301 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmmjp/crc-debug-h2ng2" Dec 02 15:07:12 crc kubenswrapper[4625]: I1202 15:07:12.195426 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bmmjp/crc-debug-h2ng2"] Dec 02 15:07:12 crc kubenswrapper[4625]: I1202 15:07:12.204029 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bmmjp/crc-debug-h2ng2"] Dec 02 15:07:12 crc kubenswrapper[4625]: I1202 15:07:12.253825 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f7g9\" (UniqueName: \"kubernetes.io/projected/7829057a-f351-4122-aa72-3deec6a283ca-kube-api-access-5f7g9\") pod \"7829057a-f351-4122-aa72-3deec6a283ca\" (UID: \"7829057a-f351-4122-aa72-3deec6a283ca\") " Dec 02 15:07:12 crc kubenswrapper[4625]: I1202 15:07:12.253957 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7829057a-f351-4122-aa72-3deec6a283ca-host\") pod \"7829057a-f351-4122-aa72-3deec6a283ca\" (UID: \"7829057a-f351-4122-aa72-3deec6a283ca\") " Dec 02 15:07:12 crc kubenswrapper[4625]: I1202 15:07:12.254696 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7829057a-f351-4122-aa72-3deec6a283ca-host" (OuterVolumeSpecName: "host") pod "7829057a-f351-4122-aa72-3deec6a283ca" (UID: "7829057a-f351-4122-aa72-3deec6a283ca"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 15:07:12 crc kubenswrapper[4625]: I1202 15:07:12.270576 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7829057a-f351-4122-aa72-3deec6a283ca-kube-api-access-5f7g9" (OuterVolumeSpecName: "kube-api-access-5f7g9") pod "7829057a-f351-4122-aa72-3deec6a283ca" (UID: "7829057a-f351-4122-aa72-3deec6a283ca"). InnerVolumeSpecName "kube-api-access-5f7g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:07:12 crc kubenswrapper[4625]: I1202 15:07:12.356544 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f7g9\" (UniqueName: \"kubernetes.io/projected/7829057a-f351-4122-aa72-3deec6a283ca-kube-api-access-5f7g9\") on node \"crc\" DevicePath \"\"" Dec 02 15:07:12 crc kubenswrapper[4625]: I1202 15:07:12.356596 4625 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7829057a-f351-4122-aa72-3deec6a283ca-host\") on node \"crc\" DevicePath \"\"" Dec 02 15:07:12 crc kubenswrapper[4625]: I1202 15:07:12.879817 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7829057a-f351-4122-aa72-3deec6a283ca" path="/var/lib/kubelet/pods/7829057a-f351-4122-aa72-3deec6a283ca/volumes" Dec 02 15:07:13 crc kubenswrapper[4625]: I1202 15:07:13.042816 4625 scope.go:117] "RemoveContainer" containerID="f0f790cf84dfba65687f18734232eb072fb7daf8a072b6841223aacfee1d3567" Dec 02 15:07:13 crc kubenswrapper[4625]: I1202 15:07:13.043751 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmmjp/crc-debug-h2ng2" Dec 02 15:07:13 crc kubenswrapper[4625]: I1202 15:07:13.552520 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bmmjp/crc-debug-n28zr"] Dec 02 15:07:13 crc kubenswrapper[4625]: E1202 15:07:13.553211 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7829057a-f351-4122-aa72-3deec6a283ca" containerName="container-00" Dec 02 15:07:13 crc kubenswrapper[4625]: I1202 15:07:13.553235 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="7829057a-f351-4122-aa72-3deec6a283ca" containerName="container-00" Dec 02 15:07:13 crc kubenswrapper[4625]: I1202 15:07:13.553531 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="7829057a-f351-4122-aa72-3deec6a283ca" containerName="container-00" Dec 02 15:07:13 crc kubenswrapper[4625]: I1202 15:07:13.554488 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmmjp/crc-debug-n28zr" Dec 02 15:07:13 crc kubenswrapper[4625]: I1202 15:07:13.708769 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83f2c7ee-a494-4d0a-868e-0fc3540eaeba-host\") pod \"crc-debug-n28zr\" (UID: \"83f2c7ee-a494-4d0a-868e-0fc3540eaeba\") " pod="openshift-must-gather-bmmjp/crc-debug-n28zr" Dec 02 15:07:13 crc kubenswrapper[4625]: I1202 15:07:13.709328 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqjxw\" (UniqueName: \"kubernetes.io/projected/83f2c7ee-a494-4d0a-868e-0fc3540eaeba-kube-api-access-fqjxw\") pod \"crc-debug-n28zr\" (UID: \"83f2c7ee-a494-4d0a-868e-0fc3540eaeba\") " pod="openshift-must-gather-bmmjp/crc-debug-n28zr" Dec 02 15:07:13 crc kubenswrapper[4625]: I1202 15:07:13.812255 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83f2c7ee-a494-4d0a-868e-0fc3540eaeba-host\") pod \"crc-debug-n28zr\" (UID: \"83f2c7ee-a494-4d0a-868e-0fc3540eaeba\") " pod="openshift-must-gather-bmmjp/crc-debug-n28zr" Dec 02 15:07:13 crc kubenswrapper[4625]: I1202 15:07:13.812381 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqjxw\" (UniqueName: \"kubernetes.io/projected/83f2c7ee-a494-4d0a-868e-0fc3540eaeba-kube-api-access-fqjxw\") pod \"crc-debug-n28zr\" (UID: \"83f2c7ee-a494-4d0a-868e-0fc3540eaeba\") " pod="openshift-must-gather-bmmjp/crc-debug-n28zr" Dec 02 15:07:13 crc kubenswrapper[4625]: I1202 15:07:13.812915 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83f2c7ee-a494-4d0a-868e-0fc3540eaeba-host\") pod \"crc-debug-n28zr\" (UID: \"83f2c7ee-a494-4d0a-868e-0fc3540eaeba\") " pod="openshift-must-gather-bmmjp/crc-debug-n28zr" Dec 02 15:07:13 crc kubenswrapper[4625]: I1202 15:07:13.860283 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqjxw\" (UniqueName: \"kubernetes.io/projected/83f2c7ee-a494-4d0a-868e-0fc3540eaeba-kube-api-access-fqjxw\") pod \"crc-debug-n28zr\" (UID: \"83f2c7ee-a494-4d0a-868e-0fc3540eaeba\") " pod="openshift-must-gather-bmmjp/crc-debug-n28zr" Dec 02 15:07:13 crc kubenswrapper[4625]: I1202 15:07:13.887446 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmmjp/crc-debug-n28zr" Dec 02 15:07:14 crc kubenswrapper[4625]: I1202 15:07:14.059017 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmmjp/crc-debug-n28zr" event={"ID":"83f2c7ee-a494-4d0a-868e-0fc3540eaeba","Type":"ContainerStarted","Data":"a2d557af33e820a76a52006a5e0936885d67917b2260881cb86b84e20d7ce1f5"} Dec 02 15:07:15 crc kubenswrapper[4625]: I1202 15:07:15.081502 4625 generic.go:334] "Generic (PLEG): container finished" podID="83f2c7ee-a494-4d0a-868e-0fc3540eaeba" containerID="9d1a7508198113f9df36cc33658a8fec6801e614959c3b44912364400007d2f1" exitCode=0 Dec 02 15:07:15 crc kubenswrapper[4625]: I1202 15:07:15.081562 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmmjp/crc-debug-n28zr" event={"ID":"83f2c7ee-a494-4d0a-868e-0fc3540eaeba","Type":"ContainerDied","Data":"9d1a7508198113f9df36cc33658a8fec6801e614959c3b44912364400007d2f1"} Dec 02 15:07:16 crc kubenswrapper[4625]: I1202 15:07:16.215381 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmmjp/crc-debug-n28zr" Dec 02 15:07:16 crc kubenswrapper[4625]: I1202 15:07:16.374821 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83f2c7ee-a494-4d0a-868e-0fc3540eaeba-host\") pod \"83f2c7ee-a494-4d0a-868e-0fc3540eaeba\" (UID: \"83f2c7ee-a494-4d0a-868e-0fc3540eaeba\") " Dec 02 15:07:16 crc kubenswrapper[4625]: I1202 15:07:16.375029 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqjxw\" (UniqueName: \"kubernetes.io/projected/83f2c7ee-a494-4d0a-868e-0fc3540eaeba-kube-api-access-fqjxw\") pod \"83f2c7ee-a494-4d0a-868e-0fc3540eaeba\" (UID: \"83f2c7ee-a494-4d0a-868e-0fc3540eaeba\") " Dec 02 15:07:16 crc kubenswrapper[4625]: I1202 15:07:16.375242 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83f2c7ee-a494-4d0a-868e-0fc3540eaeba-host" (OuterVolumeSpecName: "host") pod "83f2c7ee-a494-4d0a-868e-0fc3540eaeba" (UID: "83f2c7ee-a494-4d0a-868e-0fc3540eaeba"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 15:07:16 crc kubenswrapper[4625]: I1202 15:07:16.375832 4625 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/83f2c7ee-a494-4d0a-868e-0fc3540eaeba-host\") on node \"crc\" DevicePath \"\"" Dec 02 15:07:16 crc kubenswrapper[4625]: I1202 15:07:16.400617 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83f2c7ee-a494-4d0a-868e-0fc3540eaeba-kube-api-access-fqjxw" (OuterVolumeSpecName: "kube-api-access-fqjxw") pod "83f2c7ee-a494-4d0a-868e-0fc3540eaeba" (UID: "83f2c7ee-a494-4d0a-868e-0fc3540eaeba"). InnerVolumeSpecName "kube-api-access-fqjxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:07:16 crc kubenswrapper[4625]: I1202 15:07:16.479560 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqjxw\" (UniqueName: \"kubernetes.io/projected/83f2c7ee-a494-4d0a-868e-0fc3540eaeba-kube-api-access-fqjxw\") on node \"crc\" DevicePath \"\"" Dec 02 15:07:17 crc kubenswrapper[4625]: I1202 15:07:17.106199 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmmjp/crc-debug-n28zr" event={"ID":"83f2c7ee-a494-4d0a-868e-0fc3540eaeba","Type":"ContainerDied","Data":"a2d557af33e820a76a52006a5e0936885d67917b2260881cb86b84e20d7ce1f5"} Dec 02 15:07:17 crc kubenswrapper[4625]: I1202 15:07:17.106279 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2d557af33e820a76a52006a5e0936885d67917b2260881cb86b84e20d7ce1f5" Dec 02 15:07:17 crc kubenswrapper[4625]: I1202 15:07:17.106363 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmmjp/crc-debug-n28zr" Dec 02 15:07:17 crc kubenswrapper[4625]: I1202 15:07:17.586817 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bmmjp/crc-debug-n28zr"] Dec 02 15:07:17 crc kubenswrapper[4625]: I1202 15:07:17.647874 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bmmjp/crc-debug-n28zr"] Dec 02 15:07:18 crc kubenswrapper[4625]: I1202 15:07:18.796487 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bmmjp/crc-debug-vx976"] Dec 02 15:07:18 crc kubenswrapper[4625]: E1202 15:07:18.797083 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83f2c7ee-a494-4d0a-868e-0fc3540eaeba" containerName="container-00" Dec 02 15:07:18 crc kubenswrapper[4625]: I1202 15:07:18.797102 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f2c7ee-a494-4d0a-868e-0fc3540eaeba" containerName="container-00" Dec 02 15:07:18 crc kubenswrapper[4625]: I1202 15:07:18.797360 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="83f2c7ee-a494-4d0a-868e-0fc3540eaeba" containerName="container-00" Dec 02 15:07:18 crc kubenswrapper[4625]: I1202 15:07:18.798634 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmmjp/crc-debug-vx976" Dec 02 15:07:18 crc kubenswrapper[4625]: I1202 15:07:18.868013 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83f2c7ee-a494-4d0a-868e-0fc3540eaeba" path="/var/lib/kubelet/pods/83f2c7ee-a494-4d0a-868e-0fc3540eaeba/volumes" Dec 02 15:07:18 crc kubenswrapper[4625]: I1202 15:07:18.956001 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/284ffb3e-47ee-460c-9c95-0ffc711c703d-host\") pod \"crc-debug-vx976\" (UID: \"284ffb3e-47ee-460c-9c95-0ffc711c703d\") " pod="openshift-must-gather-bmmjp/crc-debug-vx976" Dec 02 15:07:18 crc kubenswrapper[4625]: I1202 15:07:18.956631 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b844\" (UniqueName: \"kubernetes.io/projected/284ffb3e-47ee-460c-9c95-0ffc711c703d-kube-api-access-8b844\") pod \"crc-debug-vx976\" (UID: \"284ffb3e-47ee-460c-9c95-0ffc711c703d\") " pod="openshift-must-gather-bmmjp/crc-debug-vx976" Dec 02 15:07:19 crc kubenswrapper[4625]: I1202 15:07:19.058528 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/284ffb3e-47ee-460c-9c95-0ffc711c703d-host\") pod \"crc-debug-vx976\" (UID: \"284ffb3e-47ee-460c-9c95-0ffc711c703d\") " pod="openshift-must-gather-bmmjp/crc-debug-vx976" Dec 02 15:07:19 crc kubenswrapper[4625]: I1202 15:07:19.058698 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b844\" (UniqueName: \"kubernetes.io/projected/284ffb3e-47ee-460c-9c95-0ffc711c703d-kube-api-access-8b844\") pod \"crc-debug-vx976\" (UID: \"284ffb3e-47ee-460c-9c95-0ffc711c703d\") " pod="openshift-must-gather-bmmjp/crc-debug-vx976" Dec 02 15:07:19 crc kubenswrapper[4625]: I1202 15:07:19.058777 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/284ffb3e-47ee-460c-9c95-0ffc711c703d-host\") pod \"crc-debug-vx976\" (UID: \"284ffb3e-47ee-460c-9c95-0ffc711c703d\") " pod="openshift-must-gather-bmmjp/crc-debug-vx976" Dec 02 15:07:19 crc kubenswrapper[4625]: I1202 15:07:19.104405 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b844\" (UniqueName: \"kubernetes.io/projected/284ffb3e-47ee-460c-9c95-0ffc711c703d-kube-api-access-8b844\") pod \"crc-debug-vx976\" (UID: \"284ffb3e-47ee-460c-9c95-0ffc711c703d\") " pod="openshift-must-gather-bmmjp/crc-debug-vx976" Dec 02 15:07:19 crc kubenswrapper[4625]: I1202 15:07:19.116656 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bmmjp/crc-debug-vx976" Dec 02 15:07:19 crc kubenswrapper[4625]: I1202 15:07:19.271172 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:07:19 crc kubenswrapper[4625]: I1202 15:07:19.271259 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:07:20 crc kubenswrapper[4625]: I1202 15:07:20.147109 4625 generic.go:334] "Generic (PLEG): container finished" podID="284ffb3e-47ee-460c-9c95-0ffc711c703d" containerID="701c721cd5a51583717358bfbf430be9843cd5a54e527c5ec6df7f918a044027" exitCode=0 Dec 02 15:07:20 crc kubenswrapper[4625]: I1202 15:07:20.147414 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmmjp/crc-debug-vx976" event={"ID":"284ffb3e-47ee-460c-9c95-0ffc711c703d","Type":"ContainerDied","Data":"701c721cd5a51583717358bfbf430be9843cd5a54e527c5ec6df7f918a044027"} Dec 02 15:07:20 crc kubenswrapper[4625]: I1202 15:07:20.149374 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmmjp/crc-debug-vx976" event={"ID":"284ffb3e-47ee-460c-9c95-0ffc711c703d","Type":"ContainerStarted","Data":"c808c2204a66823f3f0970b6bb267c086768963236acf6063e26074770b83ef7"} Dec 02 15:07:20 crc kubenswrapper[4625]: I1202 15:07:20.196923 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bmmjp/crc-debug-vx976"] Dec 02 15:07:20 crc kubenswrapper[4625]: I1202 15:07:20.212940 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bmmjp/crc-debug-vx976"] Dec 02 15:07:21 crc kubenswrapper[4625]: I1202 15:07:21.487872 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmmjp/crc-debug-vx976" Dec 02 15:07:21 crc kubenswrapper[4625]: I1202 15:07:21.522033 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/284ffb3e-47ee-460c-9c95-0ffc711c703d-host\") pod \"284ffb3e-47ee-460c-9c95-0ffc711c703d\" (UID: \"284ffb3e-47ee-460c-9c95-0ffc711c703d\") " Dec 02 15:07:21 crc kubenswrapper[4625]: I1202 15:07:21.522229 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b844\" (UniqueName: \"kubernetes.io/projected/284ffb3e-47ee-460c-9c95-0ffc711c703d-kube-api-access-8b844\") pod \"284ffb3e-47ee-460c-9c95-0ffc711c703d\" (UID: \"284ffb3e-47ee-460c-9c95-0ffc711c703d\") " Dec 02 15:07:21 crc kubenswrapper[4625]: I1202 15:07:21.523529 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/284ffb3e-47ee-460c-9c95-0ffc711c703d-host" (OuterVolumeSpecName: "host") pod "284ffb3e-47ee-460c-9c95-0ffc711c703d" (UID: "284ffb3e-47ee-460c-9c95-0ffc711c703d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 15:07:21 crc kubenswrapper[4625]: I1202 15:07:21.530448 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/284ffb3e-47ee-460c-9c95-0ffc711c703d-kube-api-access-8b844" (OuterVolumeSpecName: "kube-api-access-8b844") pod "284ffb3e-47ee-460c-9c95-0ffc711c703d" (UID: "284ffb3e-47ee-460c-9c95-0ffc711c703d"). InnerVolumeSpecName "kube-api-access-8b844". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:07:21 crc kubenswrapper[4625]: I1202 15:07:21.624177 4625 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/284ffb3e-47ee-460c-9c95-0ffc711c703d-host\") on node \"crc\" DevicePath \"\"" Dec 02 15:07:21 crc kubenswrapper[4625]: I1202 15:07:21.624221 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b844\" (UniqueName: \"kubernetes.io/projected/284ffb3e-47ee-460c-9c95-0ffc711c703d-kube-api-access-8b844\") on node \"crc\" DevicePath \"\"" Dec 02 15:07:22 crc kubenswrapper[4625]: I1202 15:07:22.357470 4625 scope.go:117] "RemoveContainer" containerID="701c721cd5a51583717358bfbf430be9843cd5a54e527c5ec6df7f918a044027" Dec 02 15:07:22 crc kubenswrapper[4625]: I1202 15:07:22.357551 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmmjp/crc-debug-vx976" Dec 02 15:07:22 crc kubenswrapper[4625]: I1202 15:07:22.869084 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="284ffb3e-47ee-460c-9c95-0ffc711c703d" path="/var/lib/kubelet/pods/284ffb3e-47ee-460c-9c95-0ffc711c703d/volumes" Dec 02 15:07:43 crc kubenswrapper[4625]: I1202 15:07:43.884769 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f988fb4d-rk87d_fc88c0ad-8893-4168-bf0c-e9ed829f1b62/barbican-api/0.log" Dec 02 15:07:44 crc kubenswrapper[4625]: I1202 15:07:44.112727 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f988fb4d-rk87d_fc88c0ad-8893-4168-bf0c-e9ed829f1b62/barbican-api-log/0.log" Dec 02 15:07:44 crc kubenswrapper[4625]: I1202 15:07:44.121478 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d57b47bd4-2hxfs_ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d/barbican-keystone-listener/0.log" Dec 02 15:07:44 crc kubenswrapper[4625]: I1202 15:07:44.278195 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d57b47bd4-2hxfs_ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d/barbican-keystone-listener-log/0.log" Dec 02 15:07:44 crc kubenswrapper[4625]: I1202 15:07:44.411557 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68c4cddcdc-kxpt7_183dcad1-443e-47e0-bc13-d98d7c316069/barbican-worker/0.log" Dec 02 15:07:44 crc kubenswrapper[4625]: I1202 15:07:44.422018 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68c4cddcdc-kxpt7_183dcad1-443e-47e0-bc13-d98d7c316069/barbican-worker-log/0.log" Dec 02 15:07:44 crc kubenswrapper[4625]: I1202 15:07:44.747048 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb_b31c21d6-4087-4521-8566-14b2eeabb679/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:07:44 crc kubenswrapper[4625]: I1202 15:07:44.836260 4625 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_3bd330e7-048e-4237-a165-25f8c3bf6bc3/ceilometer-central-agent/0.log" Dec 02 15:07:44 crc kubenswrapper[4625]: I1202 15:07:44.992910 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3bd330e7-048e-4237-a165-25f8c3bf6bc3/ceilometer-notification-agent/0.log" Dec 02 15:07:45 crc kubenswrapper[4625]: I1202 15:07:45.071542 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3bd330e7-048e-4237-a165-25f8c3bf6bc3/sg-core/0.log" Dec 02 15:07:45 crc kubenswrapper[4625]: I1202 15:07:45.086073 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3bd330e7-048e-4237-a165-25f8c3bf6bc3/proxy-httpd/0.log" Dec 02 15:07:45 crc kubenswrapper[4625]: I1202 15:07:45.313652 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3d2c435c-5496-4ec7-ac3f-eab4e5728204/cinder-api/0.log" Dec 02 15:07:45 crc kubenswrapper[4625]: I1202 15:07:45.383067 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3d2c435c-5496-4ec7-ac3f-eab4e5728204/cinder-api-log/0.log" Dec 02 15:07:45 crc kubenswrapper[4625]: I1202 15:07:45.537735 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9a4032f9-0bbb-4491-9f59-8b6006133dd6/cinder-scheduler/0.log" Dec 02 15:07:45 crc kubenswrapper[4625]: I1202 15:07:45.600134 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9a4032f9-0bbb-4491-9f59-8b6006133dd6/probe/0.log" Dec 02 15:07:45 crc kubenswrapper[4625]: I1202 15:07:45.941531 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw_4ab12756-db3d-4271-9017-d059eb68113e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:07:46 crc kubenswrapper[4625]: I1202 15:07:46.391362 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm_acb5fca1-1ef0-4678-8323-5a42790a0998/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:07:46 crc kubenswrapper[4625]: I1202 15:07:46.611522 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-54ffdb7d8c-skjm6_5908f5de-9af5-4cde-abf8-5959a6c8648e/init/0.log" Dec 02 15:07:46 crc kubenswrapper[4625]: I1202 15:07:46.930605 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-54ffdb7d8c-skjm6_5908f5de-9af5-4cde-abf8-5959a6c8648e/init/0.log" Dec 02 15:07:46 crc kubenswrapper[4625]: I1202 15:07:46.935686 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg_1b3affb2-6aa8-445a-81cd-6bdb90c31f45/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:07:46 crc kubenswrapper[4625]: I1202 15:07:46.987789 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-54ffdb7d8c-skjm6_5908f5de-9af5-4cde-abf8-5959a6c8648e/dnsmasq-dns/0.log" Dec 02 15:07:47 crc kubenswrapper[4625]: I1202 15:07:47.302085 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73/glance-log/0.log" Dec 02 15:07:47 crc kubenswrapper[4625]: I1202 15:07:47.312389 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73/glance-httpd/0.log" Dec 02 
15:07:47 crc kubenswrapper[4625]: I1202 15:07:47.573292 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7831eff5-dd90-4e3d-b6ec-86ec291099f2/glance-httpd/0.log" Dec 02 15:07:47 crc kubenswrapper[4625]: I1202 15:07:47.611278 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7831eff5-dd90-4e3d-b6ec-86ec291099f2/glance-log/0.log" Dec 02 15:07:47 crc kubenswrapper[4625]: I1202 15:07:47.848103 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7dc4db5bfb-zbs4l_92339196-3d33-4b76-9ba2-81e1a8373e84/horizon/1.log" Dec 02 15:07:47 crc kubenswrapper[4625]: I1202 15:07:47.981504 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7dc4db5bfb-zbs4l_92339196-3d33-4b76-9ba2-81e1a8373e84/horizon/0.log" Dec 02 15:07:48 crc kubenswrapper[4625]: I1202 15:07:48.200852 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7_a7c36e4d-5e3c-4036-abef-01a4eb799665/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:07:48 crc kubenswrapper[4625]: I1202 15:07:48.366764 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-ttw4p_6b962e31-3a28-4083-af18-2c1b6f53b3b3/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:07:48 crc kubenswrapper[4625]: I1202 15:07:48.521752 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7dc4db5bfb-zbs4l_92339196-3d33-4b76-9ba2-81e1a8373e84/horizon-log/0.log" Dec 02 15:07:48 crc kubenswrapper[4625]: I1202 15:07:48.739571 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29411461-8xbzd_7bb98122-d182-47de-a568-e8c5c90072fa/keystone-cron/0.log" Dec 02 15:07:48 crc kubenswrapper[4625]: I1202 15:07:48.883130 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_34ec415c-0f48-4b5b-98f6-6f854c2910ee/kube-state-metrics/0.log" Dec 02 15:07:49 crc kubenswrapper[4625]: I1202 15:07:49.036226 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6f7c78dbd6-lsbdb_f5f7b2e0-20a9-42b2-b323-4c813153f09f/keystone-api/0.log" Dec 02 15:07:49 crc kubenswrapper[4625]: I1202 15:07:49.155901 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m_62d61250-750b-4a2d-b2d6-a5f1b4914da4/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:07:49 crc kubenswrapper[4625]: I1202 15:07:49.272036 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:07:49 crc kubenswrapper[4625]: I1202 15:07:49.272152 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:07:49 crc kubenswrapper[4625]: I1202 15:07:49.272262 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 
15:07:49 crc kubenswrapper[4625]: I1202 15:07:49.273661 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1351779b7646efe271250a1266eca5822ee3d9c4190a100848bb20492041ab1d"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 15:07:49 crc kubenswrapper[4625]: I1202 15:07:49.273744 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://1351779b7646efe271250a1266eca5822ee3d9c4190a100848bb20492041ab1d" gracePeriod=600 Dec 02 15:07:49 crc kubenswrapper[4625]: I1202 15:07:49.776296 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="1351779b7646efe271250a1266eca5822ee3d9c4190a100848bb20492041ab1d" exitCode=0 Dec 02 15:07:49 crc kubenswrapper[4625]: I1202 15:07:49.776491 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"1351779b7646efe271250a1266eca5822ee3d9c4190a100848bb20492041ab1d"} Dec 02 15:07:49 crc kubenswrapper[4625]: I1202 15:07:49.776771 4625 scope.go:117] "RemoveContainer" containerID="73cab440f92f074a9c6eeb4ab7f5cfc24afec2ad814862b4c502b196f6abf633" Dec 02 15:07:50 crc kubenswrapper[4625]: I1202 15:07:50.049998 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b_4a01556d-36c3-4d01-9c45-faccb3941b62/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:07:50 crc kubenswrapper[4625]: I1202 15:07:50.348286 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8c746598f-ss7rg_f4477e45-6d29-4717-9168-8cf291295a40/neutron-httpd/0.log" Dec 02 15:07:50 crc kubenswrapper[4625]: I1202 15:07:50.639050 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8c746598f-ss7rg_f4477e45-6d29-4717-9168-8cf291295a40/neutron-api/0.log" Dec 02 15:07:50 crc kubenswrapper[4625]: I1202 15:07:50.795133 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"} Dec 02 15:07:51 crc kubenswrapper[4625]: I1202 15:07:51.439757 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_cc85f464-74b0-41f9-9997-21e67c3c7e3a/nova-cell0-conductor-conductor/0.log" Dec 02 15:07:51 crc kubenswrapper[4625]: I1202 15:07:51.572701 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c02f586b-acea-434e-9258-d9cd407b3595/nova-cell1-conductor-conductor/0.log" Dec 02 15:07:52 crc kubenswrapper[4625]: I1202 15:07:52.035501 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ae13d171-d7b4-4f87-b94b-b19de24b35b6/nova-cell1-novncproxy-novncproxy/0.log" Dec 02 15:07:52 crc kubenswrapper[4625]: I1202 15:07:52.385942 4625 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_1800930c-5ef6-4a3e-8a80-df933d636e5b/nova-api-log/0.log" Dec 02 15:07:52 crc kubenswrapper[4625]: I1202 15:07:52.454875 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-qxsfq_c531f95a-508b-48ea-bfb7-91659bd6df10/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:07:52 crc kubenswrapper[4625]: I1202 15:07:52.565609 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1800930c-5ef6-4a3e-8a80-df933d636e5b/nova-api-api/0.log" Dec 02 15:07:52 crc kubenswrapper[4625]: I1202 15:07:52.732021 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0994edfd-9799-4974-8f7e-eb4cf312a370/nova-metadata-log/0.log" Dec 02 15:07:53 crc kubenswrapper[4625]: I1202 15:07:53.186776 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_266c6414-c5b8-4dd2-939d-2386a0756d9c/mysql-bootstrap/0.log" Dec 02 15:07:53 crc kubenswrapper[4625]: I1202 15:07:53.466513 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_266c6414-c5b8-4dd2-939d-2386a0756d9c/galera/0.log" Dec 02 15:07:53 crc kubenswrapper[4625]: I1202 15:07:53.477959 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_266c6414-c5b8-4dd2-939d-2386a0756d9c/mysql-bootstrap/0.log" Dec 02 15:07:53 crc kubenswrapper[4625]: I1202 15:07:53.663729 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a/nova-scheduler-scheduler/0.log" Dec 02 15:07:53 crc kubenswrapper[4625]: I1202 15:07:53.729779 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2e108301-d560-49b4-a4b2-a2f45c2fa8fd/mysql-bootstrap/0.log" Dec 02 15:07:54 crc kubenswrapper[4625]: I1202 15:07:54.130154 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2e108301-d560-49b4-a4b2-a2f45c2fa8fd/mysql-bootstrap/0.log" Dec 02 15:07:54 crc kubenswrapper[4625]: I1202 15:07:54.154031 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2e108301-d560-49b4-a4b2-a2f45c2fa8fd/galera/0.log" Dec 02 15:07:54 crc kubenswrapper[4625]: I1202 15:07:54.427410 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7e342617-f071-4967-a02d-38534c2c7c11/openstackclient/0.log" Dec 02 15:07:54 crc kubenswrapper[4625]: I1202 15:07:54.573340 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-5hzbv_3fe58841-9566-4a48-9e44-6709020a943c/ovn-controller/0.log" Dec 02 15:07:54 crc kubenswrapper[4625]: I1202 15:07:54.787966 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-ph9m2_2bef51af-8aba-4e71-a607-69b0e7facae6/openstack-network-exporter/0.log" Dec 02 15:07:55 crc kubenswrapper[4625]: I1202 15:07:55.052349 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bqzbz_0be922f6-018c-4504-bc6a-f0c8dd53ce5b/ovsdb-server-init/0.log" Dec 02 15:07:55 crc kubenswrapper[4625]: I1202 15:07:55.132466 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0994edfd-9799-4974-8f7e-eb4cf312a370/nova-metadata-metadata/0.log" Dec 02 15:07:55 crc kubenswrapper[4625]: I1202 15:07:55.592030 4625 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-bqzbz_0be922f6-018c-4504-bc6a-f0c8dd53ce5b/ovsdb-server-init/0.log" Dec 02 15:07:55 crc kubenswrapper[4625]: I1202 15:07:55.733136 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bqzbz_0be922f6-018c-4504-bc6a-f0c8dd53ce5b/ovsdb-server/0.log" Dec 02 15:07:55 crc kubenswrapper[4625]: I1202 15:07:55.836255 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bqzbz_0be922f6-018c-4504-bc6a-f0c8dd53ce5b/ovs-vswitchd/0.log" Dec 02 15:07:55 crc kubenswrapper[4625]: I1202 15:07:55.959385 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-s5gxh_9505225f-1412-45d6-96f3-27b3ab5c35c1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:07:56 crc kubenswrapper[4625]: I1202 15:07:56.030489 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f8bda2bc-c054-4188-ad43-47b49dab4949/openstack-network-exporter/0.log" Dec 02 15:07:56 crc kubenswrapper[4625]: I1202 15:07:56.201399 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f8bda2bc-c054-4188-ad43-47b49dab4949/ovn-northd/0.log" Dec 02 15:07:56 crc kubenswrapper[4625]: I1202 15:07:56.248557 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8f8269cf-38ac-4207-be57-909e352cb528/memcached/0.log" Dec 02 15:07:56 crc kubenswrapper[4625]: I1202 15:07:56.364836 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_31013ff5-b8be-40fb-9e34-5eac74bd1849/ovsdbserver-nb/0.log" Dec 02 15:07:56 crc kubenswrapper[4625]: I1202 15:07:56.377043 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_31013ff5-b8be-40fb-9e34-5eac74bd1849/openstack-network-exporter/0.log" Dec 02 15:07:56 crc kubenswrapper[4625]: I1202 15:07:56.596751 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3295e090-e4c0-4c88-a3aa-9d938e0b541d/openstack-network-exporter/0.log" Dec 02 15:07:56 crc kubenswrapper[4625]: I1202 15:07:56.631833 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3295e090-e4c0-4c88-a3aa-9d938e0b541d/ovsdbserver-sb/0.log" Dec 02 15:07:56 crc kubenswrapper[4625]: I1202 15:07:56.763126 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5855fb4fd-8xvmf_dc3dac2b-e3ca-4fde-b347-598e80af89ce/placement-api/0.log" Dec 02 15:07:56 crc kubenswrapper[4625]: I1202 15:07:56.975082 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5855fb4fd-8xvmf_dc3dac2b-e3ca-4fde-b347-598e80af89ce/placement-log/0.log" Dec 02 15:07:57 crc kubenswrapper[4625]: I1202 15:07:57.012417 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_50ba9ca8-e722-4c48-9435-a358d35a893e/setup-container/0.log" Dec 02 15:07:57 crc kubenswrapper[4625]: I1202 15:07:57.250970 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_50ba9ca8-e722-4c48-9435-a358d35a893e/rabbitmq/0.log" Dec 02 15:07:57 crc kubenswrapper[4625]: I1202 15:07:57.279196 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_50ba9ca8-e722-4c48-9435-a358d35a893e/setup-container/0.log" Dec 02 15:07:57 crc kubenswrapper[4625]: I1202 15:07:57.309146 4625 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_5eb1d307-4690-436e-8f82-a27eff014c84/setup-container/0.log" Dec 02 15:07:57 crc kubenswrapper[4625]: I1202 15:07:57.504684 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5eb1d307-4690-436e-8f82-a27eff014c84/setup-container/0.log" Dec 02 15:07:57 crc kubenswrapper[4625]: I1202 15:07:57.577131 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4_abdce099-8a70-4557-860e-379c32fd5d6c/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:07:57 crc kubenswrapper[4625]: I1202 15:07:57.591401 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5eb1d307-4690-436e-8f82-a27eff014c84/rabbitmq/0.log" Dec 02 15:07:57 crc kubenswrapper[4625]: I1202 15:07:57.820677 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4d7z6_1062a08b-7d27-49af-bc24-3d8aae739f10/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:07:57 crc kubenswrapper[4625]: I1202 15:07:57.935676 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2_d28486df-5cbd-4cf1-ab77-3bb7c4582d36/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:07:57 crc kubenswrapper[4625]: I1202 15:07:57.966010 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-g8jql_bfdfc7da-d385-4c7c-8e45-fd36703b7fb6/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:07:58 crc kubenswrapper[4625]: I1202 15:07:58.115667 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-76psc_20d1bcc5-faf7-4265-bdd8-f471a4d449cd/ssh-known-hosts-edpm-deployment/0.log" Dec 02 15:07:58 crc kubenswrapper[4625]: I1202 15:07:58.256745 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6fb4775b59-xb9rg_afdaf455-8ee8-42c2-8086-305834a075a5/proxy-server/0.log" Dec 02 15:07:58 crc kubenswrapper[4625]: I1202 15:07:58.486088 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6fb4775b59-xb9rg_afdaf455-8ee8-42c2-8086-305834a075a5/proxy-httpd/0.log" Dec 02 15:07:58 crc kubenswrapper[4625]: I1202 15:07:58.542210 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-9hhrz_b5858ebe-f677-4a48-b729-a8c4023b346d/swift-ring-rebalance/0.log" Dec 02 15:07:58 crc kubenswrapper[4625]: I1202 15:07:58.674634 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/account-auditor/0.log" Dec 02 15:07:58 crc kubenswrapper[4625]: I1202 15:07:58.796642 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/account-reaper/0.log" Dec 02 15:07:58 crc kubenswrapper[4625]: I1202 15:07:58.907871 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/account-replicator/0.log" Dec 02 15:07:58 crc kubenswrapper[4625]: I1202 15:07:58.965401 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/container-auditor/0.log" Dec 02 15:07:58 crc kubenswrapper[4625]: I1202 15:07:58.982030 4625 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/account-server/0.log" Dec 02 15:07:59 crc kubenswrapper[4625]: I1202 15:07:59.055386 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/container-replicator/0.log" Dec 02 15:07:59 crc kubenswrapper[4625]: I1202 15:07:59.060438 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/container-server/0.log" Dec 02 15:07:59 crc kubenswrapper[4625]: I1202 15:07:59.189073 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/container-updater/0.log" Dec 02 15:07:59 crc kubenswrapper[4625]: I1202 15:07:59.244196 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/object-auditor/0.log" Dec 02 15:07:59 crc kubenswrapper[4625]: I1202 15:07:59.311556 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/object-replicator/0.log" Dec 02 15:07:59 crc kubenswrapper[4625]: I1202 15:07:59.318588 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/object-expirer/0.log" Dec 02 15:07:59 crc kubenswrapper[4625]: I1202 15:07:59.368038 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/object-server/0.log" Dec 02 15:07:59 crc kubenswrapper[4625]: I1202 15:07:59.434051 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/object-updater/0.log" Dec 02 15:07:59 crc kubenswrapper[4625]: I1202 15:07:59.488524 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/rsync/0.log" Dec 02 15:07:59 crc kubenswrapper[4625]: I1202 15:07:59.523070 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/swift-recon-cron/0.log" Dec 02 15:07:59 crc kubenswrapper[4625]: I1202 15:07:59.639694 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5_01572dfd-9cb1-4c55-90fc-759a859f60e4/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:07:59 crc kubenswrapper[4625]: I1202 15:07:59.741924 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_f72b183c-9a68-408e-b6b0-2accb1e96305/tempest-tests-tempest-tests-runner/0.log" Dec 02 15:07:59 crc kubenswrapper[4625]: I1202 15:07:59.866240 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_3042b4b8-7047-404f-b6b2-70c510508bc6/test-operator-logs-container/0.log" Dec 02 15:08:00 crc kubenswrapper[4625]: I1202 15:08:00.369569 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm_4b2123a9-3349-49ed-a533-b0550d7babc0/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:08:31 crc kubenswrapper[4625]: I1202 15:08:31.449876 4625 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk_f7a86a59-0433-4fd8-95b8-f1ca65eeaba8/util/0.log" Dec 02 15:08:31 crc kubenswrapper[4625]: I1202 15:08:31.769726 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk_f7a86a59-0433-4fd8-95b8-f1ca65eeaba8/util/0.log" Dec 02 15:08:31 crc kubenswrapper[4625]: I1202 15:08:31.784054 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk_f7a86a59-0433-4fd8-95b8-f1ca65eeaba8/pull/0.log" Dec 02 15:08:31 crc kubenswrapper[4625]: I1202 15:08:31.784249 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk_f7a86a59-0433-4fd8-95b8-f1ca65eeaba8/pull/0.log" Dec 02 15:08:32 crc kubenswrapper[4625]: I1202 15:08:32.062485 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk_f7a86a59-0433-4fd8-95b8-f1ca65eeaba8/util/0.log" Dec 02 15:08:32 crc kubenswrapper[4625]: I1202 15:08:32.202999 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk_f7a86a59-0433-4fd8-95b8-f1ca65eeaba8/extract/0.log" Dec 02 15:08:32 crc kubenswrapper[4625]: I1202 15:08:32.208170 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk_f7a86a59-0433-4fd8-95b8-f1ca65eeaba8/pull/0.log" Dec 02 15:08:32 crc kubenswrapper[4625]: I1202 15:08:32.345839 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-vck47_a1bf70dd-f5d1-45a9-94a9-86fffb0758b2/kube-rbac-proxy/0.log" Dec 02 15:08:32 crc kubenswrapper[4625]: I1202 15:08:32.498350 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55f4dbb9b7-bhlt2_95cd9233-3d9c-45e1-ade0-6753a952b721/kube-rbac-proxy/0.log" Dec 02 15:08:32 crc kubenswrapper[4625]: I1202 15:08:32.514850 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-vck47_a1bf70dd-f5d1-45a9-94a9-86fffb0758b2/manager/0.log" Dec 02 15:08:32 crc kubenswrapper[4625]: I1202 15:08:32.667670 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55f4dbb9b7-bhlt2_95cd9233-3d9c-45e1-ade0-6753a952b721/manager/0.log" Dec 02 15:08:32 crc kubenswrapper[4625]: I1202 15:08:32.790746 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-kl65q_d20c6701-017d-4f33-91f0-10199890032f/kube-rbac-proxy/0.log" Dec 02 15:08:32 crc kubenswrapper[4625]: I1202 15:08:32.826222 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-kl65q_d20c6701-017d-4f33-91f0-10199890032f/manager/0.log" Dec 02 15:08:33 crc kubenswrapper[4625]: I1202 15:08:33.101550 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-q854l_15504c82-ed79-4ab3-a157-7493e0b13058/kube-rbac-proxy/0.log" Dec 02 15:08:33 crc kubenswrapper[4625]: 
I1202 15:08:33.210948 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-q854l_15504c82-ed79-4ab3-a157-7493e0b13058/manager/0.log" Dec 02 15:08:33 crc kubenswrapper[4625]: I1202 15:08:33.249776 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-jhwcz_137e6ec9-76ad-4b65-a788-a8a38f84343f/kube-rbac-proxy/0.log" Dec 02 15:08:33 crc kubenswrapper[4625]: I1202 15:08:33.373291 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-jhwcz_137e6ec9-76ad-4b65-a788-a8a38f84343f/manager/0.log" Dec 02 15:08:33 crc kubenswrapper[4625]: I1202 15:08:33.505272 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-b882s_95a50933-f183-45d5-b8e2-aac85155551e/kube-rbac-proxy/0.log" Dec 02 15:08:33 crc kubenswrapper[4625]: I1202 15:08:33.596556 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-b882s_95a50933-f183-45d5-b8e2-aac85155551e/manager/0.log" Dec 02 15:08:33 crc kubenswrapper[4625]: I1202 15:08:33.817026 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-pr84z_73f97b4a-0c9b-4422-a7fd-e2aab20f9825/kube-rbac-proxy/0.log" Dec 02 15:08:34 crc kubenswrapper[4625]: I1202 15:08:34.041935 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-qtsvv_a686420b-bad9-418e-b729-96680afd0f07/kube-rbac-proxy/0.log" Dec 02 15:08:34 crc kubenswrapper[4625]: I1202 15:08:34.098937 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-pr84z_73f97b4a-0c9b-4422-a7fd-e2aab20f9825/manager/0.log" Dec 02 15:08:34 crc kubenswrapper[4625]: I1202 15:08:34.139465 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-qtsvv_a686420b-bad9-418e-b729-96680afd0f07/manager/0.log" Dec 02 15:08:34 crc kubenswrapper[4625]: I1202 15:08:34.361328 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-k4nlb_0fbe84bd-4dc3-4f2c-b890-16a1b15f4d0e/manager/0.log" Dec 02 15:08:34 crc kubenswrapper[4625]: I1202 15:08:34.382974 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-k4nlb_0fbe84bd-4dc3-4f2c-b890-16a1b15f4d0e/kube-rbac-proxy/0.log" Dec 02 15:08:34 crc kubenswrapper[4625]: I1202 15:08:34.554625 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-lwmht_5e58537e-7499-41c1-b154-ff06bd4dd58a/kube-rbac-proxy/0.log" Dec 02 15:08:34 crc kubenswrapper[4625]: I1202 15:08:34.631936 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-lwmht_5e58537e-7499-41c1-b154-ff06bd4dd58a/manager/0.log" Dec 02 15:08:34 crc kubenswrapper[4625]: I1202 15:08:34.699799 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-gr5bt_79d42122-959c-41e1-9c56-58788fd56100/kube-rbac-proxy/0.log" Dec 02 15:08:34 crc 
kubenswrapper[4625]: I1202 15:08:34.853027 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-gr5bt_79d42122-959c-41e1-9c56-58788fd56100/manager/0.log" Dec 02 15:08:35 crc kubenswrapper[4625]: I1202 15:08:35.000796 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-jwks4_910705f2-ee02-421a-a0eb-eb594d119f9e/manager/0.log" Dec 02 15:08:35 crc kubenswrapper[4625]: I1202 15:08:35.018790 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-jwks4_910705f2-ee02-421a-a0eb-eb594d119f9e/kube-rbac-proxy/0.log" Dec 02 15:08:35 crc kubenswrapper[4625]: I1202 15:08:35.240978 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-zk4xg_cc5f44ae-eba1-40ca-8391-49985c6211bd/kube-rbac-proxy/0.log" Dec 02 15:08:35 crc kubenswrapper[4625]: I1202 15:08:35.386421 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-zk4xg_cc5f44ae-eba1-40ca-8391-49985c6211bd/manager/0.log" Dec 02 15:08:35 crc kubenswrapper[4625]: I1202 15:08:35.479431 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-lg24z_d04e4d3e-b826-40ad-9955-7c7ba1379920/kube-rbac-proxy/0.log" Dec 02 15:08:35 crc kubenswrapper[4625]: I1202 15:08:35.481075 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-lg24z_d04e4d3e-b826-40ad-9955-7c7ba1379920/manager/0.log" Dec 02 15:08:35 crc kubenswrapper[4625]: I1202 15:08:35.724877 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck_a9612490-cbef-4040-a5f5-26737160de83/kube-rbac-proxy/0.log" Dec 02 15:08:35 crc kubenswrapper[4625]: I1202 15:08:35.908190 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck_a9612490-cbef-4040-a5f5-26737160de83/manager/0.log" Dec 02 15:08:36 crc kubenswrapper[4625]: I1202 15:08:36.439273 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-84d58866d9-k5nd2_aebd70d9-f01d-4141-bfc6-972472620c50/operator/0.log" Dec 02 15:08:36 crc kubenswrapper[4625]: I1202 15:08:36.506385 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gdt46_e7071f61-c3c4-4f5a-b3ea-5fc268f55dbd/registry-server/0.log" Dec 02 15:08:36 crc kubenswrapper[4625]: I1202 15:08:36.656684 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-jktk5_0f5a3014-4394-4a6f-972e-52f2ef19328f/kube-rbac-proxy/0.log" Dec 02 15:08:36 crc kubenswrapper[4625]: I1202 15:08:36.897999 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-jktk5_0f5a3014-4394-4a6f-972e-52f2ef19328f/manager/0.log" Dec 02 15:08:37 crc kubenswrapper[4625]: I1202 15:08:37.097256 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-qgjn7_4dff1b74-d58f-40b9-a3a6-c1ebdd498690/kube-rbac-proxy/0.log" 
Dec 02 15:08:37 crc kubenswrapper[4625]: I1202 15:08:37.149587 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-qgjn7_4dff1b74-d58f-40b9-a3a6-c1ebdd498690/manager/0.log" Dec 02 15:08:37 crc kubenswrapper[4625]: I1202 15:08:37.198823 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58cd586464-xpfd6_e3cfbc8e-665f-4007-a38d-714f53c48923/manager/0.log" Dec 02 15:08:37 crc kubenswrapper[4625]: I1202 15:08:37.308373 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7lmwf_38eaf493-09d1-441e-81a9-777174f24006/operator/0.log" Dec 02 15:08:37 crc kubenswrapper[4625]: I1202 15:08:37.413069 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-ls4vx_0daed4ec-6cef-4f70-bdf2-27c278868917/kube-rbac-proxy/0.log" Dec 02 15:08:37 crc kubenswrapper[4625]: I1202 15:08:37.475057 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-ls4vx_0daed4ec-6cef-4f70-bdf2-27c278868917/manager/0.log" Dec 02 15:08:37 crc kubenswrapper[4625]: I1202 15:08:37.525111 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-mmjnh_6075f378-b13f-422a-a3c0-3301d78d3fa9/kube-rbac-proxy/0.log" Dec 02 15:08:37 crc kubenswrapper[4625]: I1202 15:08:37.719575 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-mmjnh_6075f378-b13f-422a-a3c0-3301d78d3fa9/manager/0.log" Dec 02 15:08:37 crc kubenswrapper[4625]: I1202 15:08:37.741516 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-wqq4b_37a0be8e-736e-486e-a1af-abc65c34c25b/kube-rbac-proxy/0.log" Dec 02 15:08:37 crc kubenswrapper[4625]: I1202 15:08:37.783754 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-wqq4b_37a0be8e-736e-486e-a1af-abc65c34c25b/manager/0.log" Dec 02 15:08:37 crc kubenswrapper[4625]: I1202 15:08:37.976208 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-wpdzp_db262c5c-48c7-4749-990c-77993791ba47/kube-rbac-proxy/0.log" Dec 02 15:08:38 crc kubenswrapper[4625]: I1202 15:08:38.019487 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-wpdzp_db262c5c-48c7-4749-990c-77993791ba47/manager/0.log" Dec 02 15:09:02 crc kubenswrapper[4625]: I1202 15:09:02.863481 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-sjskb_bfa9a143-ca0d-4f32-b9a7-b2acb327bedc/control-plane-machine-set-operator/0.log" Dec 02 15:09:03 crc kubenswrapper[4625]: I1202 15:09:03.140303 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-p4l8q_30f3fae9-f4d5-4f32-9498-5d2a2d801654/kube-rbac-proxy/0.log" Dec 02 15:09:03 crc kubenswrapper[4625]: I1202 15:09:03.170696 4625 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-p4l8q_30f3fae9-f4d5-4f32-9498-5d2a2d801654/machine-api-operator/0.log" Dec 02 15:09:20 crc kubenswrapper[4625]: I1202 15:09:20.017001 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-6svm9_9143a513-bf1b-4452-bf75-f5fea106cda0/cert-manager-controller/0.log" Dec 02 15:09:20 crc kubenswrapper[4625]: I1202 15:09:20.177508 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-wtcwd_1a51a454-f4a2-4fac-9d2b-121515d4dcac/cert-manager-cainjector/0.log" Dec 02 15:09:20 crc kubenswrapper[4625]: I1202 15:09:20.312075 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-cxgzh_aeb44bf0-3409-493e-9666-17615ae63452/cert-manager-webhook/0.log" Dec 02 15:09:39 crc kubenswrapper[4625]: I1202 15:09:39.848708 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-xbmh7_c272e6ee-0c53-4601-bb13-b19116b52d78/nmstate-console-plugin/0.log" Dec 02 15:09:40 crc kubenswrapper[4625]: I1202 15:09:40.169708 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2gbcf_a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab/nmstate-handler/0.log" Dec 02 15:09:40 crc kubenswrapper[4625]: I1202 15:09:40.283216 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-n6vs9_ec26d36a-6a53-45ea-b678-ff1f2f663e4b/kube-rbac-proxy/0.log" Dec 02 15:09:40 crc kubenswrapper[4625]: I1202 15:09:40.283359 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-n6vs9_ec26d36a-6a53-45ea-b678-ff1f2f663e4b/nmstate-metrics/0.log" Dec 02 15:09:40 crc kubenswrapper[4625]: I1202 15:09:40.584987 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-7wnsd_c990b211-885a-4f31-835b-ebc7d42db8dc/nmstate-operator/0.log" Dec 02 15:09:40 crc kubenswrapper[4625]: I1202 15:09:40.659061 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-27pvd_52f9dddb-330a-4c13-9bb7-6a7766b6c4ec/nmstate-webhook/0.log" Dec 02 15:09:49 crc kubenswrapper[4625]: I1202 15:09:49.271292 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:09:49 crc kubenswrapper[4625]: I1202 15:09:49.271839 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:10:03 crc kubenswrapper[4625]: I1202 15:10:03.492439 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-2jqbl_5bf7d269-353b-4ac4-a7e5-02c0cd01d62a/controller/0.log" Dec 02 15:10:03 crc kubenswrapper[4625]: I1202 15:10:03.596078 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-2jqbl_5bf7d269-353b-4ac4-a7e5-02c0cd01d62a/kube-rbac-proxy/0.log" Dec 02 15:10:03 crc kubenswrapper[4625]: I1202 
15:10:03.734095 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-frr-files/0.log" Dec 02 15:10:04 crc kubenswrapper[4625]: I1202 15:10:04.061503 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-reloader/0.log" Dec 02 15:10:04 crc kubenswrapper[4625]: I1202 15:10:04.087485 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-frr-files/0.log" Dec 02 15:10:04 crc kubenswrapper[4625]: I1202 15:10:04.091463 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-metrics/0.log" Dec 02 15:10:04 crc kubenswrapper[4625]: I1202 15:10:04.150750 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-reloader/0.log" Dec 02 15:10:04 crc kubenswrapper[4625]: I1202 15:10:04.412734 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-reloader/0.log" Dec 02 15:10:04 crc kubenswrapper[4625]: I1202 15:10:04.415621 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-frr-files/0.log" Dec 02 15:10:04 crc kubenswrapper[4625]: I1202 15:10:04.484983 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-metrics/0.log" Dec 02 15:10:04 crc kubenswrapper[4625]: I1202 15:10:04.504639 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-metrics/0.log" Dec 02 15:10:04 crc kubenswrapper[4625]: I1202 15:10:04.791761 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-metrics/0.log" Dec 02 15:10:04 crc kubenswrapper[4625]: I1202 15:10:04.870682 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-reloader/0.log" Dec 02 15:10:04 crc kubenswrapper[4625]: I1202 15:10:04.892651 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-frr-files/0.log" Dec 02 15:10:04 crc kubenswrapper[4625]: I1202 15:10:04.937724 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/controller/0.log" Dec 02 15:10:05 crc kubenswrapper[4625]: I1202 15:10:05.129718 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/frr-metrics/0.log" Dec 02 15:10:05 crc kubenswrapper[4625]: I1202 15:10:05.178209 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/kube-rbac-proxy/0.log" Dec 02 15:10:05 crc kubenswrapper[4625]: I1202 15:10:05.299734 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/kube-rbac-proxy-frr/0.log" Dec 02 15:10:05 crc kubenswrapper[4625]: I1202 15:10:05.537371 4625 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/reloader/0.log" Dec 02 15:10:05 crc kubenswrapper[4625]: I1202 15:10:05.589921 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-72kpb_aad37202-ae48-4da9-b478-fad57dd764f2/frr-k8s-webhook-server/0.log" Dec 02 15:10:06 crc kubenswrapper[4625]: I1202 15:10:06.089287 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5f95f47f79-qms5t_3fd6af02-5a39-495f-8365-cd8ec3f3b051/manager/0.log" Dec 02 15:10:06 crc kubenswrapper[4625]: I1202 15:10:06.330933 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/frr/0.log" Dec 02 15:10:07 crc kubenswrapper[4625]: I1202 15:10:07.150693 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-867d4dc474-l4c4v_3980c773-00e8-4019-972e-e0f2f9724185/webhook-server/0.log" Dec 02 15:10:07 crc kubenswrapper[4625]: I1202 15:10:07.206128 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vrxh6_76b2f8e3-7f6f-4592-a2e1-542b76f8872d/kube-rbac-proxy/0.log" Dec 02 15:10:07 crc kubenswrapper[4625]: I1202 15:10:07.719163 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vrxh6_76b2f8e3-7f6f-4592-a2e1-542b76f8872d/speaker/0.log" Dec 02 15:10:19 crc kubenswrapper[4625]: I1202 15:10:19.271823 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:10:19 crc kubenswrapper[4625]: I1202 15:10:19.272535 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:10:25 crc kubenswrapper[4625]: I1202 15:10:25.790281 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm_6ba709b5-16eb-458a-a3ca-8d430acaf634/util/0.log" Dec 02 15:10:25 crc kubenswrapper[4625]: I1202 15:10:25.975538 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm_6ba709b5-16eb-458a-a3ca-8d430acaf634/util/0.log" Dec 02 15:10:26 crc kubenswrapper[4625]: I1202 15:10:26.058343 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm_6ba709b5-16eb-458a-a3ca-8d430acaf634/pull/0.log" Dec 02 15:10:26 crc kubenswrapper[4625]: I1202 15:10:26.113248 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm_6ba709b5-16eb-458a-a3ca-8d430acaf634/pull/0.log" Dec 02 15:10:26 crc kubenswrapper[4625]: I1202 15:10:26.383591 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm_6ba709b5-16eb-458a-a3ca-8d430acaf634/pull/0.log" Dec 02 15:10:26 crc 
kubenswrapper[4625]: I1202 15:10:26.401959 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm_6ba709b5-16eb-458a-a3ca-8d430acaf634/util/0.log" Dec 02 15:10:26 crc kubenswrapper[4625]: I1202 15:10:26.614617 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm_6ba709b5-16eb-458a-a3ca-8d430acaf634/extract/0.log" Dec 02 15:10:26 crc kubenswrapper[4625]: I1202 15:10:26.723018 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv_40ec79b5-40cb-49e8-b693-c63f4066b8ed/util/0.log" Dec 02 15:10:26 crc kubenswrapper[4625]: I1202 15:10:26.926961 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv_40ec79b5-40cb-49e8-b693-c63f4066b8ed/util/0.log" Dec 02 15:10:26 crc kubenswrapper[4625]: I1202 15:10:26.961033 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv_40ec79b5-40cb-49e8-b693-c63f4066b8ed/pull/0.log" Dec 02 15:10:27 crc kubenswrapper[4625]: I1202 15:10:27.022797 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv_40ec79b5-40cb-49e8-b693-c63f4066b8ed/pull/0.log" Dec 02 15:10:27 crc kubenswrapper[4625]: I1202 15:10:27.896599 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv_40ec79b5-40cb-49e8-b693-c63f4066b8ed/pull/0.log" Dec 02 15:10:27 crc kubenswrapper[4625]: I1202 15:10:27.920934 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv_40ec79b5-40cb-49e8-b693-c63f4066b8ed/util/0.log" Dec 02 15:10:27 crc kubenswrapper[4625]: I1202 15:10:27.958413 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv_40ec79b5-40cb-49e8-b693-c63f4066b8ed/extract/0.log" Dec 02 15:10:28 crc kubenswrapper[4625]: I1202 15:10:28.122453 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fg2pz_e117ec72-f216-4090-87b4-d645c924c53f/extract-utilities/0.log" Dec 02 15:10:28 crc kubenswrapper[4625]: I1202 15:10:28.348607 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fg2pz_e117ec72-f216-4090-87b4-d645c924c53f/extract-utilities/0.log" Dec 02 15:10:28 crc kubenswrapper[4625]: I1202 15:10:28.369322 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fg2pz_e117ec72-f216-4090-87b4-d645c924c53f/extract-content/0.log" Dec 02 15:10:28 crc kubenswrapper[4625]: I1202 15:10:28.373874 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fg2pz_e117ec72-f216-4090-87b4-d645c924c53f/extract-content/0.log" Dec 02 15:10:28 crc kubenswrapper[4625]: I1202 15:10:28.599959 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fg2pz_e117ec72-f216-4090-87b4-d645c924c53f/extract-utilities/0.log" Dec 02 15:10:28 crc 
kubenswrapper[4625]: I1202 15:10:28.777182 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fg2pz_e117ec72-f216-4090-87b4-d645c924c53f/extract-content/0.log" Dec 02 15:10:28 crc kubenswrapper[4625]: I1202 15:10:28.916190 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wpzzz_2896c3fb-6a7b-41c8-816f-b4ee6ee231fc/extract-utilities/0.log" Dec 02 15:10:29 crc kubenswrapper[4625]: I1202 15:10:29.200565 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fg2pz_e117ec72-f216-4090-87b4-d645c924c53f/registry-server/0.log" Dec 02 15:10:29 crc kubenswrapper[4625]: I1202 15:10:29.593375 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wpzzz_2896c3fb-6a7b-41c8-816f-b4ee6ee231fc/extract-content/0.log" Dec 02 15:10:29 crc kubenswrapper[4625]: I1202 15:10:29.603881 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wpzzz_2896c3fb-6a7b-41c8-816f-b4ee6ee231fc/extract-utilities/0.log" Dec 02 15:10:29 crc kubenswrapper[4625]: I1202 15:10:29.674752 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wpzzz_2896c3fb-6a7b-41c8-816f-b4ee6ee231fc/extract-content/0.log" Dec 02 15:10:30 crc kubenswrapper[4625]: I1202 15:10:30.215159 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wpzzz_2896c3fb-6a7b-41c8-816f-b4ee6ee231fc/extract-content/0.log" Dec 02 15:10:30 crc kubenswrapper[4625]: I1202 15:10:30.256452 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wpzzz_2896c3fb-6a7b-41c8-816f-b4ee6ee231fc/extract-utilities/0.log" Dec 02 15:10:30 crc kubenswrapper[4625]: I1202 15:10:30.509650 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lm5sj_82f63ecf-aa95-429e-a39a-796125dfa29c/marketplace-operator/0.log" Dec 02 15:10:30 crc kubenswrapper[4625]: I1202 15:10:30.654244 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wpzzz_2896c3fb-6a7b-41c8-816f-b4ee6ee231fc/registry-server/0.log" Dec 02 15:10:30 crc kubenswrapper[4625]: I1202 15:10:30.729603 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d27sv_1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7/extract-utilities/0.log" Dec 02 15:10:30 crc kubenswrapper[4625]: I1202 15:10:30.897539 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d27sv_1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7/extract-utilities/0.log" Dec 02 15:10:30 crc kubenswrapper[4625]: I1202 15:10:30.964537 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d27sv_1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7/extract-content/0.log" Dec 02 15:10:31 crc kubenswrapper[4625]: I1202 15:10:31.014271 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d27sv_1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7/extract-content/0.log" Dec 02 15:10:31 crc kubenswrapper[4625]: I1202 15:10:31.276915 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d27sv_1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7/extract-utilities/0.log" Dec 02 15:10:31 crc 
kubenswrapper[4625]: I1202 15:10:31.317554 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sc4nh_435bd873-5e0f-4479-b59b-1fd1f39fd50e/extract-utilities/0.log" Dec 02 15:10:31 crc kubenswrapper[4625]: I1202 15:10:31.365423 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d27sv_1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7/extract-content/0.log" Dec 02 15:10:31 crc kubenswrapper[4625]: I1202 15:10:31.542175 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d27sv_1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7/registry-server/0.log" Dec 02 15:10:31 crc kubenswrapper[4625]: I1202 15:10:31.677041 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sc4nh_435bd873-5e0f-4479-b59b-1fd1f39fd50e/extract-utilities/0.log" Dec 02 15:10:31 crc kubenswrapper[4625]: I1202 15:10:31.729437 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sc4nh_435bd873-5e0f-4479-b59b-1fd1f39fd50e/extract-content/0.log" Dec 02 15:10:31 crc kubenswrapper[4625]: I1202 15:10:31.760595 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sc4nh_435bd873-5e0f-4479-b59b-1fd1f39fd50e/extract-content/0.log" Dec 02 15:10:32 crc kubenswrapper[4625]: I1202 15:10:32.012582 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sc4nh_435bd873-5e0f-4479-b59b-1fd1f39fd50e/extract-content/0.log" Dec 02 15:10:32 crc kubenswrapper[4625]: I1202 15:10:32.033575 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sc4nh_435bd873-5e0f-4479-b59b-1fd1f39fd50e/extract-utilities/0.log" Dec 02 15:10:32 crc kubenswrapper[4625]: I1202 15:10:32.561210 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sc4nh_435bd873-5e0f-4479-b59b-1fd1f39fd50e/registry-server/0.log" Dec 02 15:10:34 crc kubenswrapper[4625]: I1202 15:10:34.220953 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ccb42"] Dec 02 15:10:34 crc kubenswrapper[4625]: E1202 15:10:34.221757 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284ffb3e-47ee-460c-9c95-0ffc711c703d" containerName="container-00" Dec 02 15:10:34 crc kubenswrapper[4625]: I1202 15:10:34.221784 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="284ffb3e-47ee-460c-9c95-0ffc711c703d" containerName="container-00" Dec 02 15:10:34 crc kubenswrapper[4625]: I1202 15:10:34.225196 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="284ffb3e-47ee-460c-9c95-0ffc711c703d" containerName="container-00" Dec 02 15:10:34 crc kubenswrapper[4625]: I1202 15:10:34.226930 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:34 crc kubenswrapper[4625]: I1202 15:10:34.238019 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccb42"] Dec 02 15:10:34 crc kubenswrapper[4625]: I1202 15:10:34.240973 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-utilities\") pod \"redhat-operators-ccb42\" (UID: \"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab\") " pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:34 crc kubenswrapper[4625]: I1202 15:10:34.241057 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfmtp\" (UniqueName: \"kubernetes.io/projected/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-kube-api-access-gfmtp\") pod \"redhat-operators-ccb42\" (UID: \"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab\") " pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:34 crc kubenswrapper[4625]: I1202 15:10:34.241153 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-catalog-content\") pod \"redhat-operators-ccb42\" (UID: \"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab\") " pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:34 crc kubenswrapper[4625]: I1202 15:10:34.343449 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfmtp\" (UniqueName: \"kubernetes.io/projected/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-kube-api-access-gfmtp\") pod \"redhat-operators-ccb42\" (UID: \"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab\") " pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:34 crc kubenswrapper[4625]: I1202 15:10:34.343943 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-catalog-content\") pod \"redhat-operators-ccb42\" (UID: \"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab\") " pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:34 crc kubenswrapper[4625]: I1202 15:10:34.344391 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-catalog-content\") pod \"redhat-operators-ccb42\" (UID: \"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab\") " pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:34 crc kubenswrapper[4625]: I1202 15:10:34.344551 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-utilities\") pod \"redhat-operators-ccb42\" (UID: \"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab\") " pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:34 crc kubenswrapper[4625]: I1202 15:10:34.344802 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-utilities\") pod \"redhat-operators-ccb42\" (UID: \"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab\") " pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:34 crc kubenswrapper[4625]: I1202 15:10:34.366140 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gfmtp\" (UniqueName: \"kubernetes.io/projected/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-kube-api-access-gfmtp\") pod \"redhat-operators-ccb42\" (UID: \"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab\") " pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:34 crc kubenswrapper[4625]: I1202 15:10:34.586444 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:35 crc kubenswrapper[4625]: I1202 15:10:35.393219 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccb42"] Dec 02 15:10:35 crc kubenswrapper[4625]: I1202 15:10:35.818628 4625 generic.go:334] "Generic (PLEG): container finished" podID="d172cf56-47f9-42fd-a4cd-8be6ca0e4cab" containerID="4dfc4fc9cba00f477bcf7f9d9a9e42f3c97ec16c2aaf9b93782ed7f0f119c29b" exitCode=0 Dec 02 15:10:35 crc kubenswrapper[4625]: I1202 15:10:35.818769 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccb42" event={"ID":"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab","Type":"ContainerDied","Data":"4dfc4fc9cba00f477bcf7f9d9a9e42f3c97ec16c2aaf9b93782ed7f0f119c29b"} Dec 02 15:10:35 crc kubenswrapper[4625]: I1202 15:10:35.819346 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccb42" event={"ID":"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab","Type":"ContainerStarted","Data":"596bb85b2e54064d3a2fb4f6578fd2049b496027bfe88ec2c15ffb3e81787bc4"} Dec 02 15:10:35 crc kubenswrapper[4625]: I1202 15:10:35.822170 4625 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 15:10:37 crc kubenswrapper[4625]: I1202 15:10:37.844985 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccb42" event={"ID":"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab","Type":"ContainerStarted","Data":"f94fee6593e102fccf710ef8d46e32de7dd45109d58bc06cb3e807c2ac579487"} Dec 02 15:10:39 crc kubenswrapper[4625]: I1202 15:10:39.890410 4625 generic.go:334] "Generic (PLEG): container finished" podID="d172cf56-47f9-42fd-a4cd-8be6ca0e4cab" containerID="f94fee6593e102fccf710ef8d46e32de7dd45109d58bc06cb3e807c2ac579487" exitCode=0 Dec 02 15:10:39 crc kubenswrapper[4625]: I1202 15:10:39.890483 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccb42" event={"ID":"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab","Type":"ContainerDied","Data":"f94fee6593e102fccf710ef8d46e32de7dd45109d58bc06cb3e807c2ac579487"} Dec 02 15:10:41 crc kubenswrapper[4625]: I1202 15:10:41.945684 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccb42" event={"ID":"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab","Type":"ContainerStarted","Data":"9bdc852e740603f7fbbd3b2a40ed61a1847516803ffe68067076f2c93511ccea"} Dec 02 15:10:41 crc kubenswrapper[4625]: I1202 15:10:41.987006 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ccb42" podStartSLOduration=3.426646544 podStartE2EDuration="7.986962371s" podCreationTimestamp="2025-12-02 15:10:34 +0000 UTC" firstStartedPulling="2025-12-02 15:10:35.821802375 +0000 UTC m=+5191.783979450" lastFinishedPulling="2025-12-02 15:10:40.382118192 +0000 UTC m=+5196.344295277" observedRunningTime="2025-12-02 15:10:41.978665605 +0000 UTC m=+5197.940842680" watchObservedRunningTime="2025-12-02 15:10:41.986962371 +0000 UTC m=+5197.949139436" Dec 02 15:10:44 crc 
kubenswrapper[4625]: I1202 15:10:44.587173 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:44 crc kubenswrapper[4625]: I1202 15:10:44.588396 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:45 crc kubenswrapper[4625]: I1202 15:10:45.670540 4625 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ccb42" podUID="d172cf56-47f9-42fd-a4cd-8be6ca0e4cab" containerName="registry-server" probeResult="failure" output=< Dec 02 15:10:45 crc kubenswrapper[4625]: timeout: failed to connect service ":50051" within 1s Dec 02 15:10:45 crc kubenswrapper[4625]: > Dec 02 15:10:49 crc kubenswrapper[4625]: I1202 15:10:49.271774 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:10:49 crc kubenswrapper[4625]: I1202 15:10:49.272800 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:10:49 crc kubenswrapper[4625]: I1202 15:10:49.273461 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 15:10:49 crc kubenswrapper[4625]: I1202 15:10:49.275048 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 15:10:49 crc kubenswrapper[4625]: I1202 15:10:49.275123 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca" gracePeriod=600 Dec 02 15:10:49 crc kubenswrapper[4625]: E1202 15:10:49.407047 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:10:50 crc kubenswrapper[4625]: I1202 15:10:50.202734 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca" exitCode=0 Dec 02 15:10:50 crc kubenswrapper[4625]: I1202 15:10:50.202860 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" 
event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"} Dec 02 15:10:50 crc kubenswrapper[4625]: I1202 15:10:50.203309 4625 scope.go:117] "RemoveContainer" containerID="1351779b7646efe271250a1266eca5822ee3d9c4190a100848bb20492041ab1d" Dec 02 15:10:50 crc kubenswrapper[4625]: I1202 15:10:50.204263 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca" Dec 02 15:10:50 crc kubenswrapper[4625]: E1202 15:10:50.204634 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:10:54 crc kubenswrapper[4625]: I1202 15:10:54.650214 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:54 crc kubenswrapper[4625]: I1202 15:10:54.706263 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:54 crc kubenswrapper[4625]: I1202 15:10:54.911789 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ccb42"] Dec 02 15:10:56 crc kubenswrapper[4625]: I1202 15:10:56.282064 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ccb42" podUID="d172cf56-47f9-42fd-a4cd-8be6ca0e4cab" containerName="registry-server" containerID="cri-o://9bdc852e740603f7fbbd3b2a40ed61a1847516803ffe68067076f2c93511ccea" gracePeriod=2 Dec 02 15:10:56 crc kubenswrapper[4625]: E1202 15:10:56.492785 4625 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd172cf56_47f9_42fd_a4cd_8be6ca0e4cab.slice/crio-conmon-9bdc852e740603f7fbbd3b2a40ed61a1847516803ffe68067076f2c93511ccea.scope\": RecentStats: unable to find data in memory cache]" Dec 02 15:10:56 crc kubenswrapper[4625]: I1202 15:10:56.836178 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:56 crc kubenswrapper[4625]: I1202 15:10:56.960725 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfmtp\" (UniqueName: \"kubernetes.io/projected/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-kube-api-access-gfmtp\") pod \"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab\" (UID: \"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab\") " Dec 02 15:10:56 crc kubenswrapper[4625]: I1202 15:10:56.960902 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-catalog-content\") pod \"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab\" (UID: \"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab\") " Dec 02 15:10:56 crc kubenswrapper[4625]: I1202 15:10:56.961068 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-utilities\") pod \"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab\" (UID: \"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab\") " Dec 02 15:10:56 crc kubenswrapper[4625]: I1202 15:10:56.962781 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-utilities" (OuterVolumeSpecName: "utilities") pod "d172cf56-47f9-42fd-a4cd-8be6ca0e4cab" (UID: "d172cf56-47f9-42fd-a4cd-8be6ca0e4cab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:10:56 crc kubenswrapper[4625]: I1202 15:10:56.970497 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-kube-api-access-gfmtp" (OuterVolumeSpecName: "kube-api-access-gfmtp") pod "d172cf56-47f9-42fd-a4cd-8be6ca0e4cab" (UID: "d172cf56-47f9-42fd-a4cd-8be6ca0e4cab"). InnerVolumeSpecName "kube-api-access-gfmtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.063772 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.063815 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfmtp\" (UniqueName: \"kubernetes.io/projected/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-kube-api-access-gfmtp\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.085292 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d172cf56-47f9-42fd-a4cd-8be6ca0e4cab" (UID: "d172cf56-47f9-42fd-a4cd-8be6ca0e4cab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.165726 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.295205 4625 generic.go:334] "Generic (PLEG): container finished" podID="d172cf56-47f9-42fd-a4cd-8be6ca0e4cab" containerID="9bdc852e740603f7fbbd3b2a40ed61a1847516803ffe68067076f2c93511ccea" exitCode=0 Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.295295 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccb42" event={"ID":"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab","Type":"ContainerDied","Data":"9bdc852e740603f7fbbd3b2a40ed61a1847516803ffe68067076f2c93511ccea"} Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.296640 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccb42" event={"ID":"d172cf56-47f9-42fd-a4cd-8be6ca0e4cab","Type":"ContainerDied","Data":"596bb85b2e54064d3a2fb4f6578fd2049b496027bfe88ec2c15ffb3e81787bc4"} Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.296731 4625 scope.go:117] "RemoveContainer" containerID="9bdc852e740603f7fbbd3b2a40ed61a1847516803ffe68067076f2c93511ccea" Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.295409 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccb42" Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.319515 4625 scope.go:117] "RemoveContainer" containerID="f94fee6593e102fccf710ef8d46e32de7dd45109d58bc06cb3e807c2ac579487" Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.339589 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ccb42"] Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.350132 4625 scope.go:117] "RemoveContainer" containerID="4dfc4fc9cba00f477bcf7f9d9a9e42f3c97ec16c2aaf9b93782ed7f0f119c29b" Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.352266 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ccb42"] Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.384663 4625 scope.go:117] "RemoveContainer" containerID="9bdc852e740603f7fbbd3b2a40ed61a1847516803ffe68067076f2c93511ccea" Dec 02 15:10:57 crc kubenswrapper[4625]: E1202 15:10:57.385391 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bdc852e740603f7fbbd3b2a40ed61a1847516803ffe68067076f2c93511ccea\": container with ID starting with 9bdc852e740603f7fbbd3b2a40ed61a1847516803ffe68067076f2c93511ccea not found: ID does not exist" containerID="9bdc852e740603f7fbbd3b2a40ed61a1847516803ffe68067076f2c93511ccea" Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.385439 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bdc852e740603f7fbbd3b2a40ed61a1847516803ffe68067076f2c93511ccea"} err="failed to get container status \"9bdc852e740603f7fbbd3b2a40ed61a1847516803ffe68067076f2c93511ccea\": rpc error: code = NotFound desc = could not find container \"9bdc852e740603f7fbbd3b2a40ed61a1847516803ffe68067076f2c93511ccea\": container with ID starting with 9bdc852e740603f7fbbd3b2a40ed61a1847516803ffe68067076f2c93511ccea not found: ID does not exist" Dec 02 15:10:57 crc 
kubenswrapper[4625]: I1202 15:10:57.385463 4625 scope.go:117] "RemoveContainer" containerID="f94fee6593e102fccf710ef8d46e32de7dd45109d58bc06cb3e807c2ac579487"
Dec 02 15:10:57 crc kubenswrapper[4625]: E1202 15:10:57.386012 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f94fee6593e102fccf710ef8d46e32de7dd45109d58bc06cb3e807c2ac579487\": container with ID starting with f94fee6593e102fccf710ef8d46e32de7dd45109d58bc06cb3e807c2ac579487 not found: ID does not exist" containerID="f94fee6593e102fccf710ef8d46e32de7dd45109d58bc06cb3e807c2ac579487"
Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.386064 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f94fee6593e102fccf710ef8d46e32de7dd45109d58bc06cb3e807c2ac579487"} err="failed to get container status \"f94fee6593e102fccf710ef8d46e32de7dd45109d58bc06cb3e807c2ac579487\": rpc error: code = NotFound desc = could not find container \"f94fee6593e102fccf710ef8d46e32de7dd45109d58bc06cb3e807c2ac579487\": container with ID starting with f94fee6593e102fccf710ef8d46e32de7dd45109d58bc06cb3e807c2ac579487 not found: ID does not exist"
Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.386098 4625 scope.go:117] "RemoveContainer" containerID="4dfc4fc9cba00f477bcf7f9d9a9e42f3c97ec16c2aaf9b93782ed7f0f119c29b"
Dec 02 15:10:57 crc kubenswrapper[4625]: E1202 15:10:57.386560 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dfc4fc9cba00f477bcf7f9d9a9e42f3c97ec16c2aaf9b93782ed7f0f119c29b\": container with ID starting with 4dfc4fc9cba00f477bcf7f9d9a9e42f3c97ec16c2aaf9b93782ed7f0f119c29b not found: ID does not exist" containerID="4dfc4fc9cba00f477bcf7f9d9a9e42f3c97ec16c2aaf9b93782ed7f0f119c29b"
Dec 02 15:10:57 crc kubenswrapper[4625]: I1202 15:10:57.386624 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dfc4fc9cba00f477bcf7f9d9a9e42f3c97ec16c2aaf9b93782ed7f0f119c29b"} err="failed to get container status \"4dfc4fc9cba00f477bcf7f9d9a9e42f3c97ec16c2aaf9b93782ed7f0f119c29b\": rpc error: code = NotFound desc = could not find container \"4dfc4fc9cba00f477bcf7f9d9a9e42f3c97ec16c2aaf9b93782ed7f0f119c29b\": container with ID starting with 4dfc4fc9cba00f477bcf7f9d9a9e42f3c97ec16c2aaf9b93782ed7f0f119c29b not found: ID does not exist"
Dec 02 15:10:58 crc kubenswrapper[4625]: I1202 15:10:58.871615 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d172cf56-47f9-42fd-a4cd-8be6ca0e4cab" path="/var/lib/kubelet/pods/d172cf56-47f9-42fd-a4cd-8be6ca0e4cab/volumes"
Dec 02 15:11:01 crc kubenswrapper[4625]: I1202 15:11:01.856280 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:11:01 crc kubenswrapper[4625]: E1202 15:11:01.856830 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:11:13 crc kubenswrapper[4625]: I1202 15:11:13.857277 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:11:13 crc kubenswrapper[4625]: E1202 15:11:13.858403 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:11:24 crc kubenswrapper[4625]: I1202 15:11:24.864669 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:11:24 crc kubenswrapper[4625]: E1202 15:11:24.866097 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:11:39 crc kubenswrapper[4625]: I1202 15:11:39.856434 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:11:39 crc kubenswrapper[4625]: E1202 15:11:39.857378 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:11:53 crc kubenswrapper[4625]: I1202 15:11:53.857226 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:11:53 crc kubenswrapper[4625]: E1202 15:11:53.858086 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:12:07 crc kubenswrapper[4625]: I1202 15:12:07.856883 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:12:07 crc kubenswrapper[4625]: E1202 15:12:07.860030 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:12:18 crc kubenswrapper[4625]: I1202 15:12:18.856717 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:12:18 crc kubenswrapper[4625]: E1202 15:12:18.857687 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:12:20 crc kubenswrapper[4625]: I1202 15:12:20.612171 4625 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6fb4775b59-xb9rg" podUID="afdaf455-8ee8-42c2-8086-305834a075a5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Dec 02 15:12:30 crc kubenswrapper[4625]: I1202 15:12:30.857270 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:12:30 crc kubenswrapper[4625]: E1202 15:12:30.858493 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:12:41 crc kubenswrapper[4625]: I1202 15:12:41.857692 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:12:41 crc kubenswrapper[4625]: E1202 15:12:41.860421 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:12:55 crc kubenswrapper[4625]: I1202 15:12:55.857139 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:12:55 crc kubenswrapper[4625]: E1202 15:12:55.858442 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:13:08 crc kubenswrapper[4625]: I1202 15:13:08.858632 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:13:08 crc kubenswrapper[4625]: E1202 15:13:08.859689 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:13:09 crc kubenswrapper[4625]: I1202 15:13:09.389884 4625 generic.go:334] "Generic (PLEG): container finished" podID="6e7fcdb7-9584-46f2-9522-e075dc26b408" containerID="dc6b8c1a2edfc923a8b14693ad286662ce580969cdcc7aab11d690e7db13d37b" exitCode=0
Dec 02 15:13:09 crc kubenswrapper[4625]: I1202 15:13:09.389974 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bmmjp/must-gather-6hcpw" event={"ID":"6e7fcdb7-9584-46f2-9522-e075dc26b408","Type":"ContainerDied","Data":"dc6b8c1a2edfc923a8b14693ad286662ce580969cdcc7aab11d690e7db13d37b"}
Dec 02 15:13:09 crc kubenswrapper[4625]: I1202 15:13:09.391078 4625 scope.go:117] "RemoveContainer" containerID="dc6b8c1a2edfc923a8b14693ad286662ce580969cdcc7aab11d690e7db13d37b"
Dec 02 15:13:10 crc kubenswrapper[4625]: I1202 15:13:10.062981 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bmmjp_must-gather-6hcpw_6e7fcdb7-9584-46f2-9522-e075dc26b408/gather/0.log"
Dec 02 15:13:19 crc kubenswrapper[4625]: I1202 15:13:19.102300 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bmmjp/must-gather-6hcpw"]
Dec 02 15:13:19 crc kubenswrapper[4625]: I1202 15:13:19.106151 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bmmjp/must-gather-6hcpw" podUID="6e7fcdb7-9584-46f2-9522-e075dc26b408" containerName="copy" containerID="cri-o://bd535d254659630fc6bd7e4fdff4a62bbff7a9a9f165f1bf8b1a338a7135ea1b" gracePeriod=2
Dec 02 15:13:19 crc kubenswrapper[4625]: I1202 15:13:19.123611 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bmmjp/must-gather-6hcpw"]
Dec 02 15:13:19 crc kubenswrapper[4625]: I1202 15:13:19.528291 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bmmjp_must-gather-6hcpw_6e7fcdb7-9584-46f2-9522-e075dc26b408/copy/0.log"
Dec 02 15:13:19 crc kubenswrapper[4625]: I1202 15:13:19.529961 4625 generic.go:334] "Generic (PLEG): container finished" podID="6e7fcdb7-9584-46f2-9522-e075dc26b408" containerID="bd535d254659630fc6bd7e4fdff4a62bbff7a9a9f165f1bf8b1a338a7135ea1b" exitCode=143
Dec 02 15:13:19 crc kubenswrapper[4625]: I1202 15:13:19.680405 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bmmjp_must-gather-6hcpw_6e7fcdb7-9584-46f2-9522-e075dc26b408/copy/0.log"
Dec 02 15:13:19 crc kubenswrapper[4625]: I1202 15:13:19.681260 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmmjp/must-gather-6hcpw"
Dec 02 15:13:19 crc kubenswrapper[4625]: I1202 15:13:19.789246 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8snfn\" (UniqueName: \"kubernetes.io/projected/6e7fcdb7-9584-46f2-9522-e075dc26b408-kube-api-access-8snfn\") pod \"6e7fcdb7-9584-46f2-9522-e075dc26b408\" (UID: \"6e7fcdb7-9584-46f2-9522-e075dc26b408\") "
Dec 02 15:13:19 crc kubenswrapper[4625]: I1202 15:13:19.789401 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e7fcdb7-9584-46f2-9522-e075dc26b408-must-gather-output\") pod \"6e7fcdb7-9584-46f2-9522-e075dc26b408\" (UID: \"6e7fcdb7-9584-46f2-9522-e075dc26b408\") "
Dec 02 15:13:19 crc kubenswrapper[4625]: I1202 15:13:19.796974 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e7fcdb7-9584-46f2-9522-e075dc26b408-kube-api-access-8snfn" (OuterVolumeSpecName: "kube-api-access-8snfn") pod "6e7fcdb7-9584-46f2-9522-e075dc26b408" (UID: "6e7fcdb7-9584-46f2-9522-e075dc26b408"). InnerVolumeSpecName "kube-api-access-8snfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:13:19 crc kubenswrapper[4625]: I1202 15:13:19.892564 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8snfn\" (UniqueName: \"kubernetes.io/projected/6e7fcdb7-9584-46f2-9522-e075dc26b408-kube-api-access-8snfn\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:19 crc kubenswrapper[4625]: I1202 15:13:19.971158 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e7fcdb7-9584-46f2-9522-e075dc26b408-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6e7fcdb7-9584-46f2-9522-e075dc26b408" (UID: "6e7fcdb7-9584-46f2-9522-e075dc26b408"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:13:19 crc kubenswrapper[4625]: I1202 15:13:19.995002 4625 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e7fcdb7-9584-46f2-9522-e075dc26b408-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 02 15:13:20 crc kubenswrapper[4625]: I1202 15:13:20.548870 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bmmjp_must-gather-6hcpw_6e7fcdb7-9584-46f2-9522-e075dc26b408/copy/0.log"
Dec 02 15:13:20 crc kubenswrapper[4625]: I1202 15:13:20.549476 4625 scope.go:117] "RemoveContainer" containerID="bd535d254659630fc6bd7e4fdff4a62bbff7a9a9f165f1bf8b1a338a7135ea1b"
Dec 02 15:13:20 crc kubenswrapper[4625]: I1202 15:13:20.549655 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bmmjp/must-gather-6hcpw"
Dec 02 15:13:20 crc kubenswrapper[4625]: I1202 15:13:20.581045 4625 scope.go:117] "RemoveContainer" containerID="dc6b8c1a2edfc923a8b14693ad286662ce580969cdcc7aab11d690e7db13d37b"
Dec 02 15:13:20 crc kubenswrapper[4625]: I1202 15:13:20.872025 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e7fcdb7-9584-46f2-9522-e075dc26b408" path="/var/lib/kubelet/pods/6e7fcdb7-9584-46f2-9522-e075dc26b408/volumes"
Dec 02 15:13:22 crc kubenswrapper[4625]: I1202 15:13:22.857334 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:13:22 crc kubenswrapper[4625]: E1202 15:13:22.858128 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:13:34 crc kubenswrapper[4625]: I1202 15:13:34.872982 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:13:34 crc kubenswrapper[4625]: E1202 15:13:34.874574 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:13:39 crc kubenswrapper[4625]: I1202 15:13:39.028204 4625 scope.go:117] "RemoveContainer" containerID="9d1a7508198113f9df36cc33658a8fec6801e614959c3b44912364400007d2f1"
Dec 02 15:13:47 crc kubenswrapper[4625]: I1202 15:13:47.857431 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:13:47 crc kubenswrapper[4625]: E1202 15:13:47.860688 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:13:58 crc kubenswrapper[4625]: I1202 15:13:58.858181 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:13:58 crc kubenswrapper[4625]: E1202 15:13:58.859147 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.607410 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rq7fp"]
Dec 02 15:14:01 crc kubenswrapper[4625]: E1202 15:14:01.608674 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d172cf56-47f9-42fd-a4cd-8be6ca0e4cab" containerName="registry-server"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.608696 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="d172cf56-47f9-42fd-a4cd-8be6ca0e4cab" containerName="registry-server"
Dec 02 15:14:01 crc kubenswrapper[4625]: E1202 15:14:01.608720 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7fcdb7-9584-46f2-9522-e075dc26b408" containerName="gather"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.608727 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7fcdb7-9584-46f2-9522-e075dc26b408" containerName="gather"
Dec 02 15:14:01 crc kubenswrapper[4625]: E1202 15:14:01.608746 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7fcdb7-9584-46f2-9522-e075dc26b408" containerName="copy"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.608753 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7fcdb7-9584-46f2-9522-e075dc26b408" containerName="copy"
Dec 02 15:14:01 crc kubenswrapper[4625]: E1202 15:14:01.608796 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d172cf56-47f9-42fd-a4cd-8be6ca0e4cab" containerName="extract-content"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.608804 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="d172cf56-47f9-42fd-a4cd-8be6ca0e4cab" containerName="extract-content"
Dec 02 15:14:01 crc kubenswrapper[4625]: E1202 15:14:01.608823 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d172cf56-47f9-42fd-a4cd-8be6ca0e4cab" containerName="extract-utilities"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.608830 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="d172cf56-47f9-42fd-a4cd-8be6ca0e4cab" containerName="extract-utilities"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.609569 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7fcdb7-9584-46f2-9522-e075dc26b408" containerName="copy"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.609619 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="d172cf56-47f9-42fd-a4cd-8be6ca0e4cab" containerName="registry-server"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.609648 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7fcdb7-9584-46f2-9522-e075dc26b408" containerName="gather"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.611798 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.645003 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rq7fp"]
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.717000 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-catalog-content\") pod \"certified-operators-rq7fp\" (UID: \"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb\") " pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.717052 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j22v\" (UniqueName: \"kubernetes.io/projected/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-kube-api-access-2j22v\") pod \"certified-operators-rq7fp\" (UID: \"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb\") " pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.717258 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-utilities\") pod \"certified-operators-rq7fp\" (UID: \"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb\") " pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.819869 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-catalog-content\") pod \"certified-operators-rq7fp\" (UID: \"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb\") " pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.819926 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j22v\" (UniqueName: \"kubernetes.io/projected/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-kube-api-access-2j22v\") pod \"certified-operators-rq7fp\" (UID: \"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb\") " pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.820373 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-utilities\") pod \"certified-operators-rq7fp\" (UID: \"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb\") " pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.820597 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-catalog-content\") pod \"certified-operators-rq7fp\" (UID: \"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb\") " pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:01 crc kubenswrapper[4625]: I1202 15:14:01.820703 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-utilities\") pod \"certified-operators-rq7fp\" (UID: \"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb\") " pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:02 crc kubenswrapper[4625]: I1202 15:14:02.077715 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j22v\" (UniqueName: \"kubernetes.io/projected/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-kube-api-access-2j22v\") pod \"certified-operators-rq7fp\" (UID: \"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb\") " pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:02 crc kubenswrapper[4625]: I1202 15:14:02.231665 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:03 crc kubenswrapper[4625]: I1202 15:14:03.054115 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rq7fp"]
Dec 02 15:14:03 crc kubenswrapper[4625]: I1202 15:14:03.751012 4625 generic.go:334] "Generic (PLEG): container finished" podID="1e745f82-6ed4-4b13-9bbf-55ef5a4558cb" containerID="83ddb3b6f2c47b8740bbb27830edbdc34d80ee41eb8018d03a6f2b1d2e9dbeb0" exitCode=0
Dec 02 15:14:03 crc kubenswrapper[4625]: I1202 15:14:03.751094 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rq7fp" event={"ID":"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb","Type":"ContainerDied","Data":"83ddb3b6f2c47b8740bbb27830edbdc34d80ee41eb8018d03a6f2b1d2e9dbeb0"}
Dec 02 15:14:03 crc kubenswrapper[4625]: I1202 15:14:03.751451 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rq7fp" event={"ID":"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb","Type":"ContainerStarted","Data":"90bedad9e54bb53e9a778efd8aae03ce5dfd4a4c6021cd3dcb46d833d7a68f49"}
Dec 02 15:14:04 crc kubenswrapper[4625]: I1202 15:14:04.401392 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8drd9"]
Dec 02 15:14:04 crc kubenswrapper[4625]: I1202 15:14:04.404008 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:04 crc kubenswrapper[4625]: I1202 15:14:04.421029 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8drd9"]
Dec 02 15:14:04 crc kubenswrapper[4625]: I1202 15:14:04.490636 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20072bfc-cac8-4895-b1b2-abccd5c1235f-utilities\") pod \"redhat-marketplace-8drd9\" (UID: \"20072bfc-cac8-4895-b1b2-abccd5c1235f\") " pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:04 crc kubenswrapper[4625]: I1202 15:14:04.490728 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v8ln\" (UniqueName: \"kubernetes.io/projected/20072bfc-cac8-4895-b1b2-abccd5c1235f-kube-api-access-2v8ln\") pod \"redhat-marketplace-8drd9\" (UID: \"20072bfc-cac8-4895-b1b2-abccd5c1235f\") " pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:04 crc kubenswrapper[4625]: I1202 15:14:04.491199 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20072bfc-cac8-4895-b1b2-abccd5c1235f-catalog-content\") pod \"redhat-marketplace-8drd9\" (UID: \"20072bfc-cac8-4895-b1b2-abccd5c1235f\") " pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:04 crc kubenswrapper[4625]: I1202 15:14:04.593073 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20072bfc-cac8-4895-b1b2-abccd5c1235f-catalog-content\") pod \"redhat-marketplace-8drd9\" (UID: \"20072bfc-cac8-4895-b1b2-abccd5c1235f\") " pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:04 crc kubenswrapper[4625]: I1202 15:14:04.593181 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20072bfc-cac8-4895-b1b2-abccd5c1235f-utilities\") pod \"redhat-marketplace-8drd9\" (UID: \"20072bfc-cac8-4895-b1b2-abccd5c1235f\") " pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:04 crc kubenswrapper[4625]: I1202 15:14:04.593234 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v8ln\" (UniqueName: \"kubernetes.io/projected/20072bfc-cac8-4895-b1b2-abccd5c1235f-kube-api-access-2v8ln\") pod \"redhat-marketplace-8drd9\" (UID: \"20072bfc-cac8-4895-b1b2-abccd5c1235f\") " pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:04 crc kubenswrapper[4625]: I1202 15:14:04.594108 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20072bfc-cac8-4895-b1b2-abccd5c1235f-catalog-content\") pod \"redhat-marketplace-8drd9\" (UID: \"20072bfc-cac8-4895-b1b2-abccd5c1235f\") " pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:04 crc kubenswrapper[4625]: I1202 15:14:04.594188 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20072bfc-cac8-4895-b1b2-abccd5c1235f-utilities\") pod \"redhat-marketplace-8drd9\" (UID: \"20072bfc-cac8-4895-b1b2-abccd5c1235f\") " pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:04 crc kubenswrapper[4625]: I1202 15:14:04.620222 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v8ln\" (UniqueName: \"kubernetes.io/projected/20072bfc-cac8-4895-b1b2-abccd5c1235f-kube-api-access-2v8ln\") pod \"redhat-marketplace-8drd9\" (UID: \"20072bfc-cac8-4895-b1b2-abccd5c1235f\") " pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:04 crc kubenswrapper[4625]: I1202 15:14:04.732332 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:05 crc kubenswrapper[4625]: I1202 15:14:05.315776 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8drd9"]
Dec 02 15:14:05 crc kubenswrapper[4625]: I1202 15:14:05.792018 4625 generic.go:334] "Generic (PLEG): container finished" podID="20072bfc-cac8-4895-b1b2-abccd5c1235f" containerID="e9a709546dbdf64c50bca8533c617656c120df487df51b52e8fe2877c0da5829" exitCode=0
Dec 02 15:14:05 crc kubenswrapper[4625]: I1202 15:14:05.792140 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8drd9" event={"ID":"20072bfc-cac8-4895-b1b2-abccd5c1235f","Type":"ContainerDied","Data":"e9a709546dbdf64c50bca8533c617656c120df487df51b52e8fe2877c0da5829"}
Dec 02 15:14:05 crc kubenswrapper[4625]: I1202 15:14:05.792180 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8drd9" event={"ID":"20072bfc-cac8-4895-b1b2-abccd5c1235f","Type":"ContainerStarted","Data":"8d4f02d84e96006452a76d751004d41f6dcad228d6d3e182c7db14f0e9d86613"}
Dec 02 15:14:05 crc kubenswrapper[4625]: I1202 15:14:05.796251 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rq7fp" event={"ID":"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb","Type":"ContainerStarted","Data":"dec362e8c92039708385c276beb0d39b2806aa0dfc3d468008a3fe3950231220"}
Dec 02 15:14:06 crc kubenswrapper[4625]: I1202 15:14:06.830762 4625 generic.go:334] "Generic (PLEG): container finished" podID="1e745f82-6ed4-4b13-9bbf-55ef5a4558cb" containerID="dec362e8c92039708385c276beb0d39b2806aa0dfc3d468008a3fe3950231220" exitCode=0
Dec 02 15:14:06 crc kubenswrapper[4625]: I1202 15:14:06.830826 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rq7fp" event={"ID":"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb","Type":"ContainerDied","Data":"dec362e8c92039708385c276beb0d39b2806aa0dfc3d468008a3fe3950231220"}
Dec 02 15:14:06 crc kubenswrapper[4625]: I1202 15:14:06.838253 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8drd9" event={"ID":"20072bfc-cac8-4895-b1b2-abccd5c1235f","Type":"ContainerStarted","Data":"927751b77d8c77e1d4c0a3f70f5c8ae1978caf2ec64b8a5855a3ac167833fb68"}
Dec 02 15:14:07 crc kubenswrapper[4625]: I1202 15:14:07.851047 4625 generic.go:334] "Generic (PLEG): container finished" podID="20072bfc-cac8-4895-b1b2-abccd5c1235f" containerID="927751b77d8c77e1d4c0a3f70f5c8ae1978caf2ec64b8a5855a3ac167833fb68" exitCode=0
Dec 02 15:14:07 crc kubenswrapper[4625]: I1202 15:14:07.851129 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8drd9" event={"ID":"20072bfc-cac8-4895-b1b2-abccd5c1235f","Type":"ContainerDied","Data":"927751b77d8c77e1d4c0a3f70f5c8ae1978caf2ec64b8a5855a3ac167833fb68"}
Dec 02 15:14:07 crc kubenswrapper[4625]: I1202 15:14:07.865440 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rq7fp" event={"ID":"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb","Type":"ContainerStarted","Data":"665899b419420c27fde473c54941515439864656b0151419137c8f4653f1eb3a"}
Dec 02 15:14:07 crc kubenswrapper[4625]: I1202 15:14:07.903230 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rq7fp" podStartSLOduration=3.324143748 podStartE2EDuration="6.903200351s" podCreationTimestamp="2025-12-02 15:14:01 +0000 UTC" firstStartedPulling="2025-12-02 15:14:03.754791847 +0000 UTC m=+5399.716968922" lastFinishedPulling="2025-12-02 15:14:07.33384845 +0000 UTC m=+5403.296025525" observedRunningTime="2025-12-02 15:14:07.900898408 +0000 UTC m=+5403.863075483" watchObservedRunningTime="2025-12-02 15:14:07.903200351 +0000 UTC m=+5403.865377426"
Dec 02 15:14:09 crc kubenswrapper[4625]: I1202 15:14:09.857449 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:14:09 crc kubenswrapper[4625]: E1202 15:14:09.858888 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:14:09 crc kubenswrapper[4625]: I1202 15:14:09.900793 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8drd9" event={"ID":"20072bfc-cac8-4895-b1b2-abccd5c1235f","Type":"ContainerStarted","Data":"91738ed093829ce7e41718f4baa1a7ad6179aa0d02fc0b349dc1b77256c4cdd5"}
Dec 02 15:14:09 crc kubenswrapper[4625]: I1202 15:14:09.928028 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8drd9" podStartSLOduration=2.962900507 podStartE2EDuration="5.927998311s" podCreationTimestamp="2025-12-02 15:14:04 +0000 UTC" firstStartedPulling="2025-12-02 15:14:05.794560193 +0000 UTC m=+5401.756737278" lastFinishedPulling="2025-12-02 15:14:08.759658017 +0000 UTC m=+5404.721835082" observedRunningTime="2025-12-02 15:14:09.923819298 +0000 UTC m=+5405.885996423" watchObservedRunningTime="2025-12-02 15:14:09.927998311 +0000 UTC m=+5405.890175386"
Dec 02 15:14:12 crc kubenswrapper[4625]: I1202 15:14:12.231891 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:12 crc kubenswrapper[4625]: I1202 15:14:12.232374 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:12 crc kubenswrapper[4625]: I1202 15:14:12.776420 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:13 crc kubenswrapper[4625]: I1202 15:14:13.000002 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:13 crc kubenswrapper[4625]: I1202 15:14:13.990190 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rq7fp"]
Dec 02 15:14:14 crc kubenswrapper[4625]: I1202 15:14:14.732719 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:14 crc kubenswrapper[4625]: I1202 15:14:14.733303 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:14 crc kubenswrapper[4625]: I1202 15:14:14.981466 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rq7fp" podUID="1e745f82-6ed4-4b13-9bbf-55ef5a4558cb" containerName="registry-server" containerID="cri-o://665899b419420c27fde473c54941515439864656b0151419137c8f4653f1eb3a" gracePeriod=2
Dec 02 15:14:15 crc kubenswrapper[4625]: I1202 15:14:15.221584 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:15 crc kubenswrapper[4625]: I1202 15:14:15.996186 4625 generic.go:334] "Generic (PLEG): container finished" podID="1e745f82-6ed4-4b13-9bbf-55ef5a4558cb" containerID="665899b419420c27fde473c54941515439864656b0151419137c8f4653f1eb3a" exitCode=0
Dec 02 15:14:15 crc kubenswrapper[4625]: I1202 15:14:15.996363 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rq7fp" event={"ID":"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb","Type":"ContainerDied","Data":"665899b419420c27fde473c54941515439864656b0151419137c8f4653f1eb3a"}
Dec 02 15:14:16 crc kubenswrapper[4625]: I1202 15:14:16.078461 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:16 crc kubenswrapper[4625]: I1202 15:14:16.252829 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:16 crc kubenswrapper[4625]: I1202 15:14:16.365728 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-utilities\") pod \"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb\" (UID: \"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb\") "
Dec 02 15:14:16 crc kubenswrapper[4625]: I1202 15:14:16.366417 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-catalog-content\") pod \"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb\" (UID: \"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb\") "
Dec 02 15:14:16 crc kubenswrapper[4625]: I1202 15:14:16.366536 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j22v\" (UniqueName: \"kubernetes.io/projected/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-kube-api-access-2j22v\") pod \"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb\" (UID: \"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb\") "
Dec 02 15:14:16 crc kubenswrapper[4625]: I1202 15:14:16.366891 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-utilities" (OuterVolumeSpecName: "utilities") pod "1e745f82-6ed4-4b13-9bbf-55ef5a4558cb" (UID: "1e745f82-6ed4-4b13-9bbf-55ef5a4558cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:14:16 crc kubenswrapper[4625]: I1202 15:14:16.367199 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 15:14:16 crc kubenswrapper[4625]: I1202 15:14:16.399749 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-kube-api-access-2j22v" (OuterVolumeSpecName: "kube-api-access-2j22v") pod "1e745f82-6ed4-4b13-9bbf-55ef5a4558cb" (UID: "1e745f82-6ed4-4b13-9bbf-55ef5a4558cb"). InnerVolumeSpecName "kube-api-access-2j22v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:14:16 crc kubenswrapper[4625]: I1202 15:14:16.425158 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e745f82-6ed4-4b13-9bbf-55ef5a4558cb" (UID: "1e745f82-6ed4-4b13-9bbf-55ef5a4558cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:14:16 crc kubenswrapper[4625]: I1202 15:14:16.432816 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8drd9"]
Dec 02 15:14:16 crc kubenswrapper[4625]: I1202 15:14:16.470065 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 15:14:16 crc kubenswrapper[4625]: I1202 15:14:16.470354 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j22v\" (UniqueName: \"kubernetes.io/projected/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb-kube-api-access-2j22v\") on node \"crc\" DevicePath \"\""
Dec 02 15:14:17 crc kubenswrapper[4625]: I1202 15:14:17.011013 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rq7fp"
Dec 02 15:14:17 crc kubenswrapper[4625]: I1202 15:14:17.011583 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rq7fp" event={"ID":"1e745f82-6ed4-4b13-9bbf-55ef5a4558cb","Type":"ContainerDied","Data":"90bedad9e54bb53e9a778efd8aae03ce5dfd4a4c6021cd3dcb46d833d7a68f49"}
Dec 02 15:14:17 crc kubenswrapper[4625]: I1202 15:14:17.011629 4625 scope.go:117] "RemoveContainer" containerID="665899b419420c27fde473c54941515439864656b0151419137c8f4653f1eb3a"
Dec 02 15:14:17 crc kubenswrapper[4625]: I1202 15:14:17.045769 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rq7fp"]
Dec 02 15:14:17 crc kubenswrapper[4625]: I1202 15:14:17.051279 4625 scope.go:117] "RemoveContainer" containerID="dec362e8c92039708385c276beb0d39b2806aa0dfc3d468008a3fe3950231220"
Dec 02 15:14:17 crc kubenswrapper[4625]: I1202 15:14:17.056018 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rq7fp"]
Dec 02 15:14:17 crc kubenswrapper[4625]: I1202 15:14:17.084146 4625 scope.go:117] "RemoveContainer" containerID="83ddb3b6f2c47b8740bbb27830edbdc34d80ee41eb8018d03a6f2b1d2e9dbeb0"
Dec 02 15:14:18 crc kubenswrapper[4625]: I1202 15:14:18.021908 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8drd9" podUID="20072bfc-cac8-4895-b1b2-abccd5c1235f" containerName="registry-server" containerID="cri-o://91738ed093829ce7e41718f4baa1a7ad6179aa0d02fc0b349dc1b77256c4cdd5" gracePeriod=2
Dec 02 15:14:18 crc kubenswrapper[4625]: I1202 15:14:18.615540 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:18 crc kubenswrapper[4625]: I1202 15:14:18.755404 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20072bfc-cac8-4895-b1b2-abccd5c1235f-utilities\") pod \"20072bfc-cac8-4895-b1b2-abccd5c1235f\" (UID: \"20072bfc-cac8-4895-b1b2-abccd5c1235f\") "
Dec 02 15:14:18 crc kubenswrapper[4625]: I1202 15:14:18.755515 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v8ln\" (UniqueName: \"kubernetes.io/projected/20072bfc-cac8-4895-b1b2-abccd5c1235f-kube-api-access-2v8ln\") pod \"20072bfc-cac8-4895-b1b2-abccd5c1235f\" (UID: \"20072bfc-cac8-4895-b1b2-abccd5c1235f\") "
Dec 02 15:14:18 crc kubenswrapper[4625]: I1202 15:14:18.755656 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20072bfc-cac8-4895-b1b2-abccd5c1235f-catalog-content\") pod \"20072bfc-cac8-4895-b1b2-abccd5c1235f\" (UID: \"20072bfc-cac8-4895-b1b2-abccd5c1235f\") "
Dec 02 15:14:18 crc kubenswrapper[4625]: I1202 15:14:18.756628 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20072bfc-cac8-4895-b1b2-abccd5c1235f-utilities" (OuterVolumeSpecName: "utilities") pod "20072bfc-cac8-4895-b1b2-abccd5c1235f" (UID: "20072bfc-cac8-4895-b1b2-abccd5c1235f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:14:18 crc kubenswrapper[4625]: I1202 15:14:18.767180 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20072bfc-cac8-4895-b1b2-abccd5c1235f-kube-api-access-2v8ln" (OuterVolumeSpecName: "kube-api-access-2v8ln") pod "20072bfc-cac8-4895-b1b2-abccd5c1235f" (UID: "20072bfc-cac8-4895-b1b2-abccd5c1235f"). InnerVolumeSpecName "kube-api-access-2v8ln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:14:18 crc kubenswrapper[4625]: I1202 15:14:18.775693 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20072bfc-cac8-4895-b1b2-abccd5c1235f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20072bfc-cac8-4895-b1b2-abccd5c1235f" (UID: "20072bfc-cac8-4895-b1b2-abccd5c1235f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:14:18 crc kubenswrapper[4625]: I1202 15:14:18.858846 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20072bfc-cac8-4895-b1b2-abccd5c1235f-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 15:14:18 crc kubenswrapper[4625]: I1202 15:14:18.858905 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v8ln\" (UniqueName: \"kubernetes.io/projected/20072bfc-cac8-4895-b1b2-abccd5c1235f-kube-api-access-2v8ln\") on node \"crc\" DevicePath \"\""
Dec 02 15:14:18 crc kubenswrapper[4625]: I1202 15:14:18.858924 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20072bfc-cac8-4895-b1b2-abccd5c1235f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 02 15:14:18 crc kubenswrapper[4625]: I1202 15:14:18.872843 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e745f82-6ed4-4b13-9bbf-55ef5a4558cb" path="/var/lib/kubelet/pods/1e745f82-6ed4-4b13-9bbf-55ef5a4558cb/volumes"
Dec 02 15:14:19 crc kubenswrapper[4625]: I1202 15:14:19.034958 4625 generic.go:334] "Generic (PLEG): container finished" podID="20072bfc-cac8-4895-b1b2-abccd5c1235f" containerID="91738ed093829ce7e41718f4baa1a7ad6179aa0d02fc0b349dc1b77256c4cdd5" exitCode=0
Dec 02 15:14:19 crc kubenswrapper[4625]: I1202 15:14:19.035076 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8drd9" event={"ID":"20072bfc-cac8-4895-b1b2-abccd5c1235f","Type":"ContainerDied","Data":"91738ed093829ce7e41718f4baa1a7ad6179aa0d02fc0b349dc1b77256c4cdd5"}
Dec 02 15:14:19 crc kubenswrapper[4625]: I1202 15:14:19.036703 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8drd9" event={"ID":"20072bfc-cac8-4895-b1b2-abccd5c1235f","Type":"ContainerDied","Data":"8d4f02d84e96006452a76d751004d41f6dcad228d6d3e182c7db14f0e9d86613"}
Dec 02 15:14:19 crc kubenswrapper[4625]: I1202 15:14:19.035123 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8drd9"
Dec 02 15:14:19 crc kubenswrapper[4625]: I1202 15:14:19.036782 4625 scope.go:117] "RemoveContainer" containerID="91738ed093829ce7e41718f4baa1a7ad6179aa0d02fc0b349dc1b77256c4cdd5"
Dec 02 15:14:19 crc kubenswrapper[4625]: I1202 15:14:19.074416 4625 scope.go:117] "RemoveContainer" containerID="927751b77d8c77e1d4c0a3f70f5c8ae1978caf2ec64b8a5855a3ac167833fb68"
Dec 02 15:14:19 crc kubenswrapper[4625]: I1202 15:14:19.084709 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8drd9"]
Dec 02 15:14:19 crc kubenswrapper[4625]: I1202 15:14:19.106668 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8drd9"]
Dec 02 15:14:19 crc kubenswrapper[4625]: I1202 15:14:19.108543 4625 scope.go:117] "RemoveContainer" containerID="e9a709546dbdf64c50bca8533c617656c120df487df51b52e8fe2877c0da5829"
Dec 02 15:14:19 crc kubenswrapper[4625]: I1202 15:14:19.212983 4625 scope.go:117] "RemoveContainer" containerID="91738ed093829ce7e41718f4baa1a7ad6179aa0d02fc0b349dc1b77256c4cdd5"
Dec 02 15:14:19 crc kubenswrapper[4625]: E1202 15:14:19.219247 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91738ed093829ce7e41718f4baa1a7ad6179aa0d02fc0b349dc1b77256c4cdd5\": container with ID starting with 91738ed093829ce7e41718f4baa1a7ad6179aa0d02fc0b349dc1b77256c4cdd5 not found: ID does not exist" containerID="91738ed093829ce7e41718f4baa1a7ad6179aa0d02fc0b349dc1b77256c4cdd5"
Dec 02 15:14:19 crc kubenswrapper[4625]: I1202 15:14:19.219290 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91738ed093829ce7e41718f4baa1a7ad6179aa0d02fc0b349dc1b77256c4cdd5"} err="failed to get container status \"91738ed093829ce7e41718f4baa1a7ad6179aa0d02fc0b349dc1b77256c4cdd5\": rpc error: code = NotFound desc = could not find container \"91738ed093829ce7e41718f4baa1a7ad6179aa0d02fc0b349dc1b77256c4cdd5\": container with ID starting with 91738ed093829ce7e41718f4baa1a7ad6179aa0d02fc0b349dc1b77256c4cdd5 not found: ID does not exist"
Dec 02 15:14:19 crc kubenswrapper[4625]: I1202 15:14:19.219361 4625 scope.go:117] "RemoveContainer" containerID="927751b77d8c77e1d4c0a3f70f5c8ae1978caf2ec64b8a5855a3ac167833fb68"
Dec 02 15:14:19 crc kubenswrapper[4625]: E1202 15:14:19.219860 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"927751b77d8c77e1d4c0a3f70f5c8ae1978caf2ec64b8a5855a3ac167833fb68\": container with ID starting with 927751b77d8c77e1d4c0a3f70f5c8ae1978caf2ec64b8a5855a3ac167833fb68 not found: ID does not exist" containerID="927751b77d8c77e1d4c0a3f70f5c8ae1978caf2ec64b8a5855a3ac167833fb68"
Dec 02 15:14:19 crc kubenswrapper[4625]: I1202 15:14:19.219886 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"927751b77d8c77e1d4c0a3f70f5c8ae1978caf2ec64b8a5855a3ac167833fb68"} err="failed to get container status \"927751b77d8c77e1d4c0a3f70f5c8ae1978caf2ec64b8a5855a3ac167833fb68\": rpc error: code = NotFound desc = could not find container \"927751b77d8c77e1d4c0a3f70f5c8ae1978caf2ec64b8a5855a3ac167833fb68\": container with ID starting with 927751b77d8c77e1d4c0a3f70f5c8ae1978caf2ec64b8a5855a3ac167833fb68 not found: ID does not exist"
Dec 02 15:14:19 crc kubenswrapper[4625]: I1202 15:14:19.219902 4625 scope.go:117] "RemoveContainer" containerID="e9a709546dbdf64c50bca8533c617656c120df487df51b52e8fe2877c0da5829"
Dec 02 15:14:19 crc kubenswrapper[4625]: E1202 15:14:19.220201 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9a709546dbdf64c50bca8533c617656c120df487df51b52e8fe2877c0da5829\": container with ID starting with e9a709546dbdf64c50bca8533c617656c120df487df51b52e8fe2877c0da5829 not found: ID does not exist" containerID="e9a709546dbdf64c50bca8533c617656c120df487df51b52e8fe2877c0da5829"
Dec 02 15:14:19 crc kubenswrapper[4625]: I1202 15:14:19.220221 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9a709546dbdf64c50bca8533c617656c120df487df51b52e8fe2877c0da5829"} err="failed to get container status \"e9a709546dbdf64c50bca8533c617656c120df487df51b52e8fe2877c0da5829\": rpc error: code = NotFound desc = could not find container \"e9a709546dbdf64c50bca8533c617656c120df487df51b52e8fe2877c0da5829\": container with ID starting with e9a709546dbdf64c50bca8533c617656c120df487df51b52e8fe2877c0da5829 not found: ID does not exist"
Dec 02 15:14:20 crc kubenswrapper[4625]: I1202 15:14:20.856779 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:14:20 crc kubenswrapper[4625]: E1202 15:14:20.857574 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:14:20 crc kubenswrapper[4625]: I1202 15:14:20.869737 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20072bfc-cac8-4895-b1b2-abccd5c1235f" path="/var/lib/kubelet/pods/20072bfc-cac8-4895-b1b2-abccd5c1235f/volumes"
Dec 02 15:14:34 crc kubenswrapper[4625]: I1202 15:14:34.871159 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:14:34 crc kubenswrapper[4625]: E1202 15:14:34.872458 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:14:48 crc kubenswrapper[4625]: I1202 15:14:48.858012 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:14:48 crc kubenswrapper[4625]: E1202 15:14:48.858762 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:14:59 crc kubenswrapper[4625]: I1202 15:14:59.856553 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca"
Dec 02 15:14:59 crc kubenswrapper[4625]: E1202 15:14:59.857386 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.176725 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh"]
Dec 02 15:15:00 crc kubenswrapper[4625]: E1202 15:15:00.177692 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20072bfc-cac8-4895-b1b2-abccd5c1235f" containerName="extract-content"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.177719 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="20072bfc-cac8-4895-b1b2-abccd5c1235f" containerName="extract-content"
Dec 02 15:15:00 crc kubenswrapper[4625]: E1202 15:15:00.177734 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e745f82-6ed4-4b13-9bbf-55ef5a4558cb" containerName="extract-utilities"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.177743 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e745f82-6ed4-4b13-9bbf-55ef5a4558cb" containerName="extract-utilities"
Dec 02 15:15:00 crc kubenswrapper[4625]: E1202 15:15:00.177764 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20072bfc-cac8-4895-b1b2-abccd5c1235f" containerName="registry-server"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.177772 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="20072bfc-cac8-4895-b1b2-abccd5c1235f" containerName="registry-server"
Dec 02 15:15:00 crc kubenswrapper[4625]: E1202 15:15:00.177786 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20072bfc-cac8-4895-b1b2-abccd5c1235f" containerName="extract-utilities"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.177793 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="20072bfc-cac8-4895-b1b2-abccd5c1235f" containerName="extract-utilities"
Dec 02 15:15:00 crc kubenswrapper[4625]: E1202 15:15:00.177825 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e745f82-6ed4-4b13-9bbf-55ef5a4558cb" containerName="registry-server"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.177834 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e745f82-6ed4-4b13-9bbf-55ef5a4558cb" containerName="registry-server"
Dec 02 15:15:00 crc kubenswrapper[4625]: E1202 15:15:00.177848 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e745f82-6ed4-4b13-9bbf-55ef5a4558cb" containerName="extract-content"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.177859 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e745f82-6ed4-4b13-9bbf-55ef5a4558cb" containerName="extract-content"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.178118 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="20072bfc-cac8-4895-b1b2-abccd5c1235f" containerName="registry-server"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.178159 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e745f82-6ed4-4b13-9bbf-55ef5a4558cb" containerName="registry-server"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.178910 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.182616 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.189836 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.241675 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh"]
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.257018 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/253e062d-7568-44a0-af20-ce62f3428c27-secret-volume\") pod \"collect-profiles-29411475-7pbfh\" (UID: \"253e062d-7568-44a0-af20-ce62f3428c27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.257120 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/253e062d-7568-44a0-af20-ce62f3428c27-config-volume\") pod \"collect-profiles-29411475-7pbfh\" (UID: \"253e062d-7568-44a0-af20-ce62f3428c27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.257177 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v4f5\" (UniqueName: \"kubernetes.io/projected/253e062d-7568-44a0-af20-ce62f3428c27-kube-api-access-9v4f5\") pod \"collect-profiles-29411475-7pbfh\" (UID: \"253e062d-7568-44a0-af20-ce62f3428c27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.363892 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/253e062d-7568-44a0-af20-ce62f3428c27-config-volume\") pod \"collect-profiles-29411475-7pbfh\" (UID: \"253e062d-7568-44a0-af20-ce62f3428c27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.364006 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v4f5\" (UniqueName: \"kubernetes.io/projected/253e062d-7568-44a0-af20-ce62f3428c27-kube-api-access-9v4f5\") pod \"collect-profiles-29411475-7pbfh\" (UID: \"253e062d-7568-44a0-af20-ce62f3428c27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.364099 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/253e062d-7568-44a0-af20-ce62f3428c27-secret-volume\") pod \"collect-profiles-29411475-7pbfh\" (UID: \"253e062d-7568-44a0-af20-ce62f3428c27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.366209 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/253e062d-7568-44a0-af20-ce62f3428c27-config-volume\") pod \"collect-profiles-29411475-7pbfh\" (UID: \"253e062d-7568-44a0-af20-ce62f3428c27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.402185 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/253e062d-7568-44a0-af20-ce62f3428c27-secret-volume\") pod \"collect-profiles-29411475-7pbfh\" (UID: \"253e062d-7568-44a0-af20-ce62f3428c27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.425550 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v4f5\" (UniqueName: \"kubernetes.io/projected/253e062d-7568-44a0-af20-ce62f3428c27-kube-api-access-9v4f5\") pod \"collect-profiles-29411475-7pbfh\" (UID: \"253e062d-7568-44a0-af20-ce62f3428c27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh"
Dec 02 15:15:00 crc kubenswrapper[4625]: I1202 15:15:00.531700 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh"
Dec 02 15:15:01 crc kubenswrapper[4625]: I1202 15:15:01.178653 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh"]
Dec 02 15:15:01 crc kubenswrapper[4625]: W1202 15:15:01.214126 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod253e062d_7568_44a0_af20_ce62f3428c27.slice/crio-1a7368c55bf92a5dccf864ad9e3103891569a4cea86f07da3dd862f1f20b881e WatchSource:0}: Error finding container 1a7368c55bf92a5dccf864ad9e3103891569a4cea86f07da3dd862f1f20b881e: Status 404 returned error can't find the container with id 1a7368c55bf92a5dccf864ad9e3103891569a4cea86f07da3dd862f1f20b881e
Dec 02 15:15:01 crc kubenswrapper[4625]: I1202 15:15:01.578390 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh" event={"ID":"253e062d-7568-44a0-af20-ce62f3428c27","Type":"ContainerStarted","Data":"2335750fa2b771765eb0aa76c72a85a438ddea9c84f780819aa526de45e46e5a"}
Dec 02 15:15:01 crc kubenswrapper[4625]: I1202 15:15:01.578461 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh" event={"ID":"253e062d-7568-44a0-af20-ce62f3428c27","Type":"ContainerStarted","Data":"1a7368c55bf92a5dccf864ad9e3103891569a4cea86f07da3dd862f1f20b881e"}
Dec 02 15:15:01 crc kubenswrapper[4625]: I1202 15:15:01.618412 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh" podStartSLOduration=1.618385661 podStartE2EDuration="1.618385661s" podCreationTimestamp="2025-12-02 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:15:01.617072425 +0000 UTC m=+5457.579249500" watchObservedRunningTime="2025-12-02 15:15:01.618385661 +0000 UTC m=+5457.580562746"
Dec 02 15:15:02 crc kubenswrapper[4625]: I1202 15:15:02.592599 4625 generic.go:334] "Generic (PLEG): container finished" podID="253e062d-7568-44a0-af20-ce62f3428c27" containerID="2335750fa2b771765eb0aa76c72a85a438ddea9c84f780819aa526de45e46e5a" exitCode=0
Dec 02 15:15:02 crc kubenswrapper[4625]: I1202 15:15:02.592690 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh" event={"ID":"253e062d-7568-44a0-af20-ce62f3428c27","Type":"ContainerDied","Data":"2335750fa2b771765eb0aa76c72a85a438ddea9c84f780819aa526de45e46e5a"}
Dec 02 15:15:04 crc kubenswrapper[4625]: I1202 15:15:04.022051 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh"
Dec 02 15:15:04 crc kubenswrapper[4625]: I1202 15:15:04.084738 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v4f5\" (UniqueName: \"kubernetes.io/projected/253e062d-7568-44a0-af20-ce62f3428c27-kube-api-access-9v4f5\") pod \"253e062d-7568-44a0-af20-ce62f3428c27\" (UID: \"253e062d-7568-44a0-af20-ce62f3428c27\") "
Dec 02 15:15:04 crc kubenswrapper[4625]: I1202 15:15:04.084843 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/253e062d-7568-44a0-af20-ce62f3428c27-secret-volume\") pod \"253e062d-7568-44a0-af20-ce62f3428c27\" (UID: \"253e062d-7568-44a0-af20-ce62f3428c27\") "
Dec 02 15:15:04 crc kubenswrapper[4625]: I1202 15:15:04.084964 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/253e062d-7568-44a0-af20-ce62f3428c27-config-volume\") pod \"253e062d-7568-44a0-af20-ce62f3428c27\" (UID: \"253e062d-7568-44a0-af20-ce62f3428c27\") "
Dec 02 15:15:04 crc kubenswrapper[4625]: I1202 15:15:04.086778 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/253e062d-7568-44a0-af20-ce62f3428c27-config-volume" (OuterVolumeSpecName: "config-volume") pod "253e062d-7568-44a0-af20-ce62f3428c27" (UID: "253e062d-7568-44a0-af20-ce62f3428c27"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 02 15:15:04 crc kubenswrapper[4625]: I1202 15:15:04.094976 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/253e062d-7568-44a0-af20-ce62f3428c27-kube-api-access-9v4f5" (OuterVolumeSpecName: "kube-api-access-9v4f5") pod "253e062d-7568-44a0-af20-ce62f3428c27" (UID: "253e062d-7568-44a0-af20-ce62f3428c27"). InnerVolumeSpecName "kube-api-access-9v4f5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:15:04 crc kubenswrapper[4625]: I1202 15:15:04.105497 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/253e062d-7568-44a0-af20-ce62f3428c27-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "253e062d-7568-44a0-af20-ce62f3428c27" (UID: "253e062d-7568-44a0-af20-ce62f3428c27"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 02 15:15:04 crc kubenswrapper[4625]: I1202 15:15:04.187890 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v4f5\" (UniqueName: \"kubernetes.io/projected/253e062d-7568-44a0-af20-ce62f3428c27-kube-api-access-9v4f5\") on node \"crc\" DevicePath \"\""
Dec 02 15:15:04 crc kubenswrapper[4625]: I1202 15:15:04.187931 4625 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/253e062d-7568-44a0-af20-ce62f3428c27-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 02 15:15:04 crc kubenswrapper[4625]: I1202 15:15:04.187943 4625 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/253e062d-7568-44a0-af20-ce62f3428c27-config-volume\") on node \"crc\" DevicePath \"\""
Dec 02 15:15:04 crc kubenswrapper[4625]: I1202 15:15:04.281452 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp"]
Dec 02 15:15:04 crc kubenswrapper[4625]: I1202 15:15:04.289912 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29411430-mkzbp"]
Dec 02 15:15:04 crc kubenswrapper[4625]: I1202 15:15:04.618596 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh" event={"ID":"253e062d-7568-44a0-af20-ce62f3428c27","Type":"ContainerDied","Data":"1a7368c55bf92a5dccf864ad9e3103891569a4cea86f07da3dd862f1f20b881e"}
Dec 02 15:15:04 crc kubenswrapper[4625]: I1202 15:15:04.618660 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a7368c55bf92a5dccf864ad9e3103891569a4cea86f07da3dd862f1f20b881e"
Dec 02 15:15:04 crc kubenswrapper[4625]: I1202 15:15:04.618695 4625 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29411475-7pbfh" Dec 02 15:15:04 crc kubenswrapper[4625]: I1202 15:15:04.876938 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3dd164e-dc52-4dee-afe2-4042f69ffa85" path="/var/lib/kubelet/pods/f3dd164e-dc52-4dee-afe2-4042f69ffa85/volumes" Dec 02 15:15:13 crc kubenswrapper[4625]: I1202 15:15:13.856745 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca" Dec 02 15:15:13 crc kubenswrapper[4625]: E1202 15:15:13.857462 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:15:28 crc kubenswrapper[4625]: I1202 15:15:28.856452 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca" Dec 02 15:15:28 crc kubenswrapper[4625]: E1202 15:15:28.857635 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:15:30 crc kubenswrapper[4625]: I1202 15:15:30.328715 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8hz6p"] Dec 02 15:15:30 crc kubenswrapper[4625]: E1202 15:15:30.329455 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253e062d-7568-44a0-af20-ce62f3428c27" containerName="collect-profiles" Dec 02 15:15:30 crc kubenswrapper[4625]: I1202 15:15:30.329474 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="253e062d-7568-44a0-af20-ce62f3428c27" containerName="collect-profiles" Dec 02 15:15:30 crc kubenswrapper[4625]: I1202 15:15:30.329730 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="253e062d-7568-44a0-af20-ce62f3428c27" containerName="collect-profiles" Dec 02 15:15:30 crc kubenswrapper[4625]: I1202 15:15:30.331230 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:30 crc kubenswrapper[4625]: I1202 15:15:30.345213 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8hz6p"] Dec 02 15:15:30 crc kubenswrapper[4625]: I1202 15:15:30.389907 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b04698-e650-4661-ab95-ab816172f360-utilities\") pod \"community-operators-8hz6p\" (UID: \"66b04698-e650-4661-ab95-ab816172f360\") " pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:30 crc kubenswrapper[4625]: I1202 15:15:30.390058 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b04698-e650-4661-ab95-ab816172f360-catalog-content\") pod \"community-operators-8hz6p\" (UID: \"66b04698-e650-4661-ab95-ab816172f360\") " pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:30 crc kubenswrapper[4625]: I1202 15:15:30.390100 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-976dv\" (UniqueName: \"kubernetes.io/projected/66b04698-e650-4661-ab95-ab816172f360-kube-api-access-976dv\") pod \"community-operators-8hz6p\" (UID: \"66b04698-e650-4661-ab95-ab816172f360\") " pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:30 crc kubenswrapper[4625]: I1202 15:15:30.492122 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b04698-e650-4661-ab95-ab816172f360-utilities\") pod \"community-operators-8hz6p\" (UID: \"66b04698-e650-4661-ab95-ab816172f360\") " pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:30 crc kubenswrapper[4625]: I1202 15:15:30.492597 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b04698-e650-4661-ab95-ab816172f360-utilities\") pod \"community-operators-8hz6p\" (UID: \"66b04698-e650-4661-ab95-ab816172f360\") " pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:30 crc kubenswrapper[4625]: I1202 15:15:30.492761 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b04698-e650-4661-ab95-ab816172f360-catalog-content\") pod \"community-operators-8hz6p\" (UID: \"66b04698-e650-4661-ab95-ab816172f360\") " pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:30 crc kubenswrapper[4625]: I1202 15:15:30.492903 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-976dv\" (UniqueName: \"kubernetes.io/projected/66b04698-e650-4661-ab95-ab816172f360-kube-api-access-976dv\") pod \"community-operators-8hz6p\" (UID: \"66b04698-e650-4661-ab95-ab816172f360\") " pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:30 crc kubenswrapper[4625]: I1202 15:15:30.493446 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b04698-e650-4661-ab95-ab816172f360-catalog-content\") pod \"community-operators-8hz6p\" (UID: \"66b04698-e650-4661-ab95-ab816172f360\") " pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:30 crc kubenswrapper[4625]: I1202 15:15:30.513916 4625 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-976dv\" (UniqueName: \"kubernetes.io/projected/66b04698-e650-4661-ab95-ab816172f360-kube-api-access-976dv\") pod \"community-operators-8hz6p\" (UID: \"66b04698-e650-4661-ab95-ab816172f360\") " pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:30 crc kubenswrapper[4625]: I1202 15:15:30.681430 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:31 crc kubenswrapper[4625]: I1202 15:15:31.409358 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8hz6p"] Dec 02 15:15:32 crc kubenswrapper[4625]: I1202 15:15:32.716153 4625 generic.go:334] "Generic (PLEG): container finished" podID="66b04698-e650-4661-ab95-ab816172f360" containerID="cfbfcd1f9804d812169cedd0f492e800051df97e6415cf8c40463433a7547a07" exitCode=0 Dec 02 15:15:32 crc kubenswrapper[4625]: I1202 15:15:32.716735 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hz6p" event={"ID":"66b04698-e650-4661-ab95-ab816172f360","Type":"ContainerDied","Data":"cfbfcd1f9804d812169cedd0f492e800051df97e6415cf8c40463433a7547a07"} Dec 02 15:15:32 crc kubenswrapper[4625]: I1202 15:15:32.716792 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hz6p" event={"ID":"66b04698-e650-4661-ab95-ab816172f360","Type":"ContainerStarted","Data":"b2af4b081f40cb79659749364ba08e0f1a1d147b3c6aeecdd2340e793282ce70"} Dec 02 15:15:35 crc kubenswrapper[4625]: I1202 15:15:35.758435 4625 generic.go:334] "Generic (PLEG): container finished" podID="66b04698-e650-4661-ab95-ab816172f360" containerID="4c0305d9f3f24d4e84d89a0a6448ec829fcb82b356b5e436c319e966326288d8" exitCode=0 Dec 02 15:15:35 crc kubenswrapper[4625]: I1202 15:15:35.758542 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hz6p" event={"ID":"66b04698-e650-4661-ab95-ab816172f360","Type":"ContainerDied","Data":"4c0305d9f3f24d4e84d89a0a6448ec829fcb82b356b5e436c319e966326288d8"} Dec 02 15:15:37 crc kubenswrapper[4625]: I1202 15:15:37.805329 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hz6p" event={"ID":"66b04698-e650-4661-ab95-ab816172f360","Type":"ContainerStarted","Data":"c15777dd9d2fc669ac253ba29474e539741566f47a451720bb0f808532bb3d65"} Dec 02 15:15:37 crc kubenswrapper[4625]: I1202 15:15:37.830236 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8hz6p" podStartSLOduration=4.216404682 podStartE2EDuration="7.830214786s" podCreationTimestamp="2025-12-02 15:15:30 +0000 UTC" firstStartedPulling="2025-12-02 15:15:32.721605828 +0000 UTC m=+5488.683782903" lastFinishedPulling="2025-12-02 15:15:36.335415932 +0000 UTC m=+5492.297593007" observedRunningTime="2025-12-02 15:15:37.825809257 +0000 UTC m=+5493.787986332" watchObservedRunningTime="2025-12-02 15:15:37.830214786 +0000 UTC m=+5493.792391861" Dec 02 15:15:39 crc kubenswrapper[4625]: I1202 15:15:39.186806 4625 scope.go:117] "RemoveContainer" containerID="e8c1f81ddf2b14b5d1b42073c2afd5a53dbd24a26d5329cc0b5f0e16b35cca83" Dec 02 15:15:39 crc kubenswrapper[4625]: I1202 15:15:39.857393 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca" Dec 02 15:15:39 crc kubenswrapper[4625]: E1202 15:15:39.857818 4625 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:15:40 crc kubenswrapper[4625]: I1202 15:15:40.682924 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:40 crc kubenswrapper[4625]: I1202 15:15:40.683032 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:40 crc kubenswrapper[4625]: I1202 15:15:40.781137 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:50 crc kubenswrapper[4625]: I1202 15:15:50.762476 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:50 crc kubenswrapper[4625]: I1202 15:15:50.850807 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8hz6p"] Dec 02 15:15:50 crc kubenswrapper[4625]: I1202 15:15:50.962945 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8hz6p" podUID="66b04698-e650-4661-ab95-ab816172f360" containerName="registry-server" containerID="cri-o://c15777dd9d2fc669ac253ba29474e539741566f47a451720bb0f808532bb3d65" gracePeriod=2 Dec 02 15:15:51 crc kubenswrapper[4625]: I1202 15:15:51.973871 4625 generic.go:334] "Generic (PLEG): container finished" podID="66b04698-e650-4661-ab95-ab816172f360" containerID="c15777dd9d2fc669ac253ba29474e539741566f47a451720bb0f808532bb3d65" exitCode=0 Dec 02 15:15:51 crc kubenswrapper[4625]: I1202 15:15:51.974130 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hz6p" event={"ID":"66b04698-e650-4661-ab95-ab816172f360","Type":"ContainerDied","Data":"c15777dd9d2fc669ac253ba29474e539741566f47a451720bb0f808532bb3d65"} Dec 02 15:15:52 crc kubenswrapper[4625]: I1202 15:15:52.286651 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:52 crc kubenswrapper[4625]: I1202 15:15:52.381745 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-976dv\" (UniqueName: \"kubernetes.io/projected/66b04698-e650-4661-ab95-ab816172f360-kube-api-access-976dv\") pod \"66b04698-e650-4661-ab95-ab816172f360\" (UID: \"66b04698-e650-4661-ab95-ab816172f360\") " Dec 02 15:15:52 crc kubenswrapper[4625]: I1202 15:15:52.381851 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b04698-e650-4661-ab95-ab816172f360-utilities\") pod \"66b04698-e650-4661-ab95-ab816172f360\" (UID: \"66b04698-e650-4661-ab95-ab816172f360\") " Dec 02 15:15:52 crc kubenswrapper[4625]: I1202 15:15:52.381879 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b04698-e650-4661-ab95-ab816172f360-catalog-content\") pod \"66b04698-e650-4661-ab95-ab816172f360\" (UID: \"66b04698-e650-4661-ab95-ab816172f360\") " Dec 02 15:15:52 crc kubenswrapper[4625]: I1202 15:15:52.387858 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b04698-e650-4661-ab95-ab816172f360-kube-api-access-976dv" (OuterVolumeSpecName: "kube-api-access-976dv") pod "66b04698-e650-4661-ab95-ab816172f360" (UID: "66b04698-e650-4661-ab95-ab816172f360"). InnerVolumeSpecName "kube-api-access-976dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:15:52 crc kubenswrapper[4625]: I1202 15:15:52.394278 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66b04698-e650-4661-ab95-ab816172f360-utilities" (OuterVolumeSpecName: "utilities") pod "66b04698-e650-4661-ab95-ab816172f360" (UID: "66b04698-e650-4661-ab95-ab816172f360"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:15:52 crc kubenswrapper[4625]: I1202 15:15:52.431149 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66b04698-e650-4661-ab95-ab816172f360-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66b04698-e650-4661-ab95-ab816172f360" (UID: "66b04698-e650-4661-ab95-ab816172f360"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:15:52 crc kubenswrapper[4625]: I1202 15:15:52.485501 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-976dv\" (UniqueName: \"kubernetes.io/projected/66b04698-e650-4661-ab95-ab816172f360-kube-api-access-976dv\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:52 crc kubenswrapper[4625]: I1202 15:15:52.485868 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66b04698-e650-4661-ab95-ab816172f360-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:52 crc kubenswrapper[4625]: I1202 15:15:52.486003 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66b04698-e650-4661-ab95-ab816172f360-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:15:52 crc kubenswrapper[4625]: I1202 15:15:52.857616 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca" Dec 02 15:15:52 crc kubenswrapper[4625]: I1202 15:15:52.985345 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hz6p" event={"ID":"66b04698-e650-4661-ab95-ab816172f360","Type":"ContainerDied","Data":"b2af4b081f40cb79659749364ba08e0f1a1d147b3c6aeecdd2340e793282ce70"} Dec 02 15:15:52 crc kubenswrapper[4625]: I1202 15:15:52.985412 4625 scope.go:117] "RemoveContainer" containerID="c15777dd9d2fc669ac253ba29474e539741566f47a451720bb0f808532bb3d65" Dec 02 15:15:52 crc kubenswrapper[4625]: I1202 15:15:52.986413 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8hz6p" Dec 02 15:15:53 crc kubenswrapper[4625]: I1202 15:15:53.016297 4625 scope.go:117] "RemoveContainer" containerID="4c0305d9f3f24d4e84d89a0a6448ec829fcb82b356b5e436c319e966326288d8" Dec 02 15:15:53 crc kubenswrapper[4625]: I1202 15:15:53.020057 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8hz6p"] Dec 02 15:15:53 crc kubenswrapper[4625]: I1202 15:15:53.032613 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8hz6p"] Dec 02 15:15:53 crc kubenswrapper[4625]: I1202 15:15:53.044023 4625 scope.go:117] "RemoveContainer" containerID="cfbfcd1f9804d812169cedd0f492e800051df97e6415cf8c40463433a7547a07" Dec 02 15:15:53 crc kubenswrapper[4625]: I1202 15:15:53.998740 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"e879b6051207e9e1241754852b2a846deb0a2c61815c504ee18197eab67afcda"} Dec 02 15:15:54 crc kubenswrapper[4625]: I1202 15:15:54.872267 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66b04698-e650-4661-ab95-ab816172f360" path="/var/lib/kubelet/pods/66b04698-e650-4661-ab95-ab816172f360/volumes" Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.136065 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7zl6r/must-gather-xcs4w"] Dec 02 15:16:23 crc kubenswrapper[4625]: E1202 15:16:23.140481 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b04698-e650-4661-ab95-ab816172f360" containerName="extract-content" Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.140670 4625 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="66b04698-e650-4661-ab95-ab816172f360" containerName="extract-content" Dec 02 15:16:23 crc kubenswrapper[4625]: E1202 15:16:23.140738 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b04698-e650-4661-ab95-ab816172f360" containerName="extract-utilities" Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.140790 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b04698-e650-4661-ab95-ab816172f360" containerName="extract-utilities" Dec 02 15:16:23 crc kubenswrapper[4625]: E1202 15:16:23.140854 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b04698-e650-4661-ab95-ab816172f360" containerName="registry-server" Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.140907 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b04698-e650-4661-ab95-ab816172f360" containerName="registry-server" Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.141247 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b04698-e650-4661-ab95-ab816172f360" containerName="registry-server" Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.142495 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7zl6r/must-gather-xcs4w" Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.148478 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7zl6r"/"openshift-service-ca.crt" Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.151126 4625 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7zl6r"/"kube-root-ca.crt" Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.161862 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7zl6r/must-gather-xcs4w"] Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.326262 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7128569a-269e-4043-8119-7a880bf03aa0-must-gather-output\") pod \"must-gather-xcs4w\" (UID: \"7128569a-269e-4043-8119-7a880bf03aa0\") " pod="openshift-must-gather-7zl6r/must-gather-xcs4w" Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.326439 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9tsx\" (UniqueName: \"kubernetes.io/projected/7128569a-269e-4043-8119-7a880bf03aa0-kube-api-access-n9tsx\") pod \"must-gather-xcs4w\" (UID: \"7128569a-269e-4043-8119-7a880bf03aa0\") " pod="openshift-must-gather-7zl6r/must-gather-xcs4w" Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.429065 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9tsx\" (UniqueName: \"kubernetes.io/projected/7128569a-269e-4043-8119-7a880bf03aa0-kube-api-access-n9tsx\") pod \"must-gather-xcs4w\" (UID: \"7128569a-269e-4043-8119-7a880bf03aa0\") " pod="openshift-must-gather-7zl6r/must-gather-xcs4w" Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.429575 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7128569a-269e-4043-8119-7a880bf03aa0-must-gather-output\") pod \"must-gather-xcs4w\" (UID: \"7128569a-269e-4043-8119-7a880bf03aa0\") " pod="openshift-must-gather-7zl6r/must-gather-xcs4w" Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.430163 4625 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7128569a-269e-4043-8119-7a880bf03aa0-must-gather-output\") pod \"must-gather-xcs4w\" (UID: \"7128569a-269e-4043-8119-7a880bf03aa0\") " pod="openshift-must-gather-7zl6r/must-gather-xcs4w" Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.451413 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9tsx\" (UniqueName: \"kubernetes.io/projected/7128569a-269e-4043-8119-7a880bf03aa0-kube-api-access-n9tsx\") pod \"must-gather-xcs4w\" (UID: \"7128569a-269e-4043-8119-7a880bf03aa0\") " pod="openshift-must-gather-7zl6r/must-gather-xcs4w" Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.469210 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7zl6r/must-gather-xcs4w" Dec 02 15:16:23 crc kubenswrapper[4625]: I1202 15:16:23.984213 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7zl6r/must-gather-xcs4w"] Dec 02 15:16:24 crc kubenswrapper[4625]: I1202 15:16:24.529178 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7zl6r/must-gather-xcs4w" event={"ID":"7128569a-269e-4043-8119-7a880bf03aa0","Type":"ContainerStarted","Data":"b3a498a3c61b8e912869d5a113ee7cdd0e5de71dbb4a28bdb6e826b5604398e1"} Dec 02 15:16:24 crc kubenswrapper[4625]: I1202 15:16:24.529669 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7zl6r/must-gather-xcs4w" event={"ID":"7128569a-269e-4043-8119-7a880bf03aa0","Type":"ContainerStarted","Data":"ef4d4e44bb860600b4e57f7b7d4cea5e7210d0ec637a9db4e6c075ac7a954d8d"} Dec 02 15:16:25 crc kubenswrapper[4625]: I1202 15:16:25.545258 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7zl6r/must-gather-xcs4w" event={"ID":"7128569a-269e-4043-8119-7a880bf03aa0","Type":"ContainerStarted","Data":"64d2cd8bcc6890a636a1d64c74eb3000c0dc8bf89c5344c407e1ff9f12aaa0aa"} Dec 02 15:16:25 crc kubenswrapper[4625]: I1202 15:16:25.568505 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7zl6r/must-gather-xcs4w" podStartSLOduration=2.568473649 podStartE2EDuration="2.568473649s" podCreationTimestamp="2025-12-02 15:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:25.564809779 +0000 UTC m=+5541.526986854" watchObservedRunningTime="2025-12-02 15:16:25.568473649 +0000 UTC m=+5541.530650724" Dec 02 15:16:29 crc kubenswrapper[4625]: I1202 15:16:29.247284 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7zl6r/crc-debug-gwvnb"] Dec 02 15:16:29 crc kubenswrapper[4625]: I1202 15:16:29.252742 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7zl6r/crc-debug-gwvnb" Dec 02 15:16:29 crc kubenswrapper[4625]: I1202 15:16:29.255657 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7zl6r"/"default-dockercfg-8qwkd" Dec 02 15:16:29 crc kubenswrapper[4625]: I1202 15:16:29.386630 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/43801076-d154-4bd2-af97-1238b97699b8-host\") pod \"crc-debug-gwvnb\" (UID: \"43801076-d154-4bd2-af97-1238b97699b8\") " pod="openshift-must-gather-7zl6r/crc-debug-gwvnb" Dec 02 15:16:29 crc kubenswrapper[4625]: I1202 15:16:29.386741 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb9sc\" (UniqueName: \"kubernetes.io/projected/43801076-d154-4bd2-af97-1238b97699b8-kube-api-access-gb9sc\") pod \"crc-debug-gwvnb\" (UID: \"43801076-d154-4bd2-af97-1238b97699b8\") " pod="openshift-must-gather-7zl6r/crc-debug-gwvnb" Dec 02 15:16:29 crc kubenswrapper[4625]: I1202 15:16:29.488111 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/43801076-d154-4bd2-af97-1238b97699b8-host\") pod \"crc-debug-gwvnb\" (UID: \"43801076-d154-4bd2-af97-1238b97699b8\") " pod="openshift-must-gather-7zl6r/crc-debug-gwvnb" Dec 02 15:16:29 crc kubenswrapper[4625]: I1202 15:16:29.488564 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb9sc\" (UniqueName: \"kubernetes.io/projected/43801076-d154-4bd2-af97-1238b97699b8-kube-api-access-gb9sc\") pod \"crc-debug-gwvnb\" (UID: \"43801076-d154-4bd2-af97-1238b97699b8\") " pod="openshift-must-gather-7zl6r/crc-debug-gwvnb" Dec 02 15:16:29 crc kubenswrapper[4625]: I1202 15:16:29.489044 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/43801076-d154-4bd2-af97-1238b97699b8-host\") pod \"crc-debug-gwvnb\" (UID: \"43801076-d154-4bd2-af97-1238b97699b8\") " pod="openshift-must-gather-7zl6r/crc-debug-gwvnb" Dec 02 15:16:29 crc kubenswrapper[4625]: I1202 15:16:29.515565 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb9sc\" (UniqueName: \"kubernetes.io/projected/43801076-d154-4bd2-af97-1238b97699b8-kube-api-access-gb9sc\") pod \"crc-debug-gwvnb\" (UID: \"43801076-d154-4bd2-af97-1238b97699b8\") " pod="openshift-must-gather-7zl6r/crc-debug-gwvnb" Dec 02 15:16:29 crc kubenswrapper[4625]: I1202 15:16:29.578648 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7zl6r/crc-debug-gwvnb" Dec 02 15:16:29 crc kubenswrapper[4625]: W1202 15:16:29.616577 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43801076_d154_4bd2_af97_1238b97699b8.slice/crio-1e94f7b094fdaf72d00255486f4e80cb8ee977c40d8735f1b097a85a14819985 WatchSource:0}: Error finding container 1e94f7b094fdaf72d00255486f4e80cb8ee977c40d8735f1b097a85a14819985: Status 404 returned error can't find the container with id 1e94f7b094fdaf72d00255486f4e80cb8ee977c40d8735f1b097a85a14819985 Dec 02 15:16:30 crc kubenswrapper[4625]: I1202 15:16:30.607517 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7zl6r/crc-debug-gwvnb" event={"ID":"43801076-d154-4bd2-af97-1238b97699b8","Type":"ContainerStarted","Data":"c4cbf471f6e07730427d2d2bd835919983a1d91cdbf49e6bd6abb20fc4a7ccdd"} Dec 02 15:16:30 crc kubenswrapper[4625]: I1202 15:16:30.608065 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7zl6r/crc-debug-gwvnb" event={"ID":"43801076-d154-4bd2-af97-1238b97699b8","Type":"ContainerStarted","Data":"1e94f7b094fdaf72d00255486f4e80cb8ee977c40d8735f1b097a85a14819985"} Dec 02 15:16:30 crc kubenswrapper[4625]: I1202 15:16:30.625926 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7zl6r/crc-debug-gwvnb" podStartSLOduration=1.625896232 podStartE2EDuration="1.625896232s" podCreationTimestamp="2025-12-02 15:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-02 15:16:30.622567772 +0000 UTC m=+5546.584744847" watchObservedRunningTime="2025-12-02 15:16:30.625896232 +0000 UTC m=+5546.588073307" Dec 02 15:17:22 crc kubenswrapper[4625]: I1202 15:17:22.246349 4625 generic.go:334] "Generic (PLEG): container finished" podID="43801076-d154-4bd2-af97-1238b97699b8" containerID="c4cbf471f6e07730427d2d2bd835919983a1d91cdbf49e6bd6abb20fc4a7ccdd" exitCode=0 Dec 02 15:17:22 crc kubenswrapper[4625]: I1202 15:17:22.246457 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7zl6r/crc-debug-gwvnb" event={"ID":"43801076-d154-4bd2-af97-1238b97699b8","Type":"ContainerDied","Data":"c4cbf471f6e07730427d2d2bd835919983a1d91cdbf49e6bd6abb20fc4a7ccdd"} Dec 02 15:17:23 crc kubenswrapper[4625]: I1202 15:17:23.366828 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7zl6r/crc-debug-gwvnb" Dec 02 15:17:23 crc kubenswrapper[4625]: I1202 15:17:23.415970 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7zl6r/crc-debug-gwvnb"] Dec 02 15:17:23 crc kubenswrapper[4625]: I1202 15:17:23.425730 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7zl6r/crc-debug-gwvnb"] Dec 02 15:17:23 crc kubenswrapper[4625]: I1202 15:17:23.492133 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb9sc\" (UniqueName: \"kubernetes.io/projected/43801076-d154-4bd2-af97-1238b97699b8-kube-api-access-gb9sc\") pod \"43801076-d154-4bd2-af97-1238b97699b8\" (UID: \"43801076-d154-4bd2-af97-1238b97699b8\") " Dec 02 15:17:23 crc kubenswrapper[4625]: I1202 15:17:23.492300 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/43801076-d154-4bd2-af97-1238b97699b8-host\") pod \"43801076-d154-4bd2-af97-1238b97699b8\" (UID: \"43801076-d154-4bd2-af97-1238b97699b8\") " Dec 02 15:17:23 crc kubenswrapper[4625]: I1202 15:17:23.492387 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43801076-d154-4bd2-af97-1238b97699b8-host" (OuterVolumeSpecName: "host") pod "43801076-d154-4bd2-af97-1238b97699b8" (UID: "43801076-d154-4bd2-af97-1238b97699b8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 15:17:23 crc kubenswrapper[4625]: I1202 15:17:23.492828 4625 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/43801076-d154-4bd2-af97-1238b97699b8-host\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:23 crc kubenswrapper[4625]: I1202 15:17:23.513404 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43801076-d154-4bd2-af97-1238b97699b8-kube-api-access-gb9sc" (OuterVolumeSpecName: "kube-api-access-gb9sc") pod "43801076-d154-4bd2-af97-1238b97699b8" (UID: "43801076-d154-4bd2-af97-1238b97699b8"). InnerVolumeSpecName "kube-api-access-gb9sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:17:23 crc kubenswrapper[4625]: I1202 15:17:23.595055 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb9sc\" (UniqueName: \"kubernetes.io/projected/43801076-d154-4bd2-af97-1238b97699b8-kube-api-access-gb9sc\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:24 crc kubenswrapper[4625]: I1202 15:17:24.269392 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e94f7b094fdaf72d00255486f4e80cb8ee977c40d8735f1b097a85a14819985" Dec 02 15:17:24 crc kubenswrapper[4625]: I1202 15:17:24.269503 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7zl6r/crc-debug-gwvnb" Dec 02 15:17:24 crc kubenswrapper[4625]: I1202 15:17:24.653262 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7zl6r/crc-debug-dpw9q"] Dec 02 15:17:24 crc kubenswrapper[4625]: E1202 15:17:24.653725 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43801076-d154-4bd2-af97-1238b97699b8" containerName="container-00" Dec 02 15:17:24 crc kubenswrapper[4625]: I1202 15:17:24.653742 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="43801076-d154-4bd2-af97-1238b97699b8" containerName="container-00" Dec 02 15:17:24 crc kubenswrapper[4625]: I1202 15:17:24.653999 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="43801076-d154-4bd2-af97-1238b97699b8" containerName="container-00" Dec 02 15:17:24 crc kubenswrapper[4625]: I1202 15:17:24.654726 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7zl6r/crc-debug-dpw9q" Dec 02 15:17:24 crc kubenswrapper[4625]: I1202 15:17:24.657258 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7zl6r"/"default-dockercfg-8qwkd" Dec 02 15:17:24 crc kubenswrapper[4625]: I1202 15:17:24.842261 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7mcf\" (UniqueName: \"kubernetes.io/projected/373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef-kube-api-access-n7mcf\") pod \"crc-debug-dpw9q\" (UID: \"373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef\") " pod="openshift-must-gather-7zl6r/crc-debug-dpw9q" Dec 02 15:17:24 crc kubenswrapper[4625]: I1202 15:17:24.842867 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef-host\") pod \"crc-debug-dpw9q\" (UID: \"373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef\") " pod="openshift-must-gather-7zl6r/crc-debug-dpw9q" Dec 02 15:17:24 crc kubenswrapper[4625]: I1202 15:17:24.870222 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43801076-d154-4bd2-af97-1238b97699b8" path="/var/lib/kubelet/pods/43801076-d154-4bd2-af97-1238b97699b8/volumes" Dec 02 15:17:24 crc kubenswrapper[4625]: I1202 15:17:24.945556 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7mcf\" (UniqueName: \"kubernetes.io/projected/373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef-kube-api-access-n7mcf\") pod \"crc-debug-dpw9q\" (UID: \"373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef\") " pod="openshift-must-gather-7zl6r/crc-debug-dpw9q" Dec 02 15:17:24 crc kubenswrapper[4625]: I1202 15:17:24.945650 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef-host\") pod \"crc-debug-dpw9q\" (UID: \"373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef\") " pod="openshift-must-gather-7zl6r/crc-debug-dpw9q" Dec 02 15:17:24 crc kubenswrapper[4625]: I1202 15:17:24.945783 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef-host\") pod \"crc-debug-dpw9q\" (UID: \"373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef\") " pod="openshift-must-gather-7zl6r/crc-debug-dpw9q" Dec 02 15:17:24 crc kubenswrapper[4625]: I1202 15:17:24.965264 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7mcf\" (UniqueName: 
\"kubernetes.io/projected/373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef-kube-api-access-n7mcf\") pod \"crc-debug-dpw9q\" (UID: \"373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef\") " pod="openshift-must-gather-7zl6r/crc-debug-dpw9q" Dec 02 15:17:24 crc kubenswrapper[4625]: I1202 15:17:24.973300 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7zl6r/crc-debug-dpw9q" Dec 02 15:17:25 crc kubenswrapper[4625]: I1202 15:17:25.283520 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7zl6r/crc-debug-dpw9q" event={"ID":"373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef","Type":"ContainerStarted","Data":"01af01b483c76b886c4b809c115150cf75b92d66554136b7c44eb2a1daeac2d8"} Dec 02 15:17:25 crc kubenswrapper[4625]: I1202 15:17:25.284117 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7zl6r/crc-debug-dpw9q" event={"ID":"373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef","Type":"ContainerStarted","Data":"7d982daf8c0d6b66ccb39f33bc3f4d9c50118c5ffc0168333fdf46f7a6733cd2"} Dec 02 15:17:26 crc kubenswrapper[4625]: I1202 15:17:26.304525 4625 generic.go:334] "Generic (PLEG): container finished" podID="373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef" containerID="01af01b483c76b886c4b809c115150cf75b92d66554136b7c44eb2a1daeac2d8" exitCode=0 Dec 02 15:17:26 crc kubenswrapper[4625]: I1202 15:17:26.304632 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7zl6r/crc-debug-dpw9q" event={"ID":"373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef","Type":"ContainerDied","Data":"01af01b483c76b886c4b809c115150cf75b92d66554136b7c44eb2a1daeac2d8"} Dec 02 15:17:27 crc kubenswrapper[4625]: I1202 15:17:27.425595 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7zl6r/crc-debug-dpw9q" Dec 02 15:17:27 crc kubenswrapper[4625]: I1202 15:17:27.524288 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef-host\") pod \"373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef\" (UID: \"373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef\") " Dec 02 15:17:27 crc kubenswrapper[4625]: I1202 15:17:27.524678 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7mcf\" (UniqueName: \"kubernetes.io/projected/373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef-kube-api-access-n7mcf\") pod \"373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef\" (UID: \"373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef\") " Dec 02 15:17:27 crc kubenswrapper[4625]: I1202 15:17:27.525535 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef-host" (OuterVolumeSpecName: "host") pod "373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef" (UID: "373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 15:17:27 crc kubenswrapper[4625]: I1202 15:17:27.548680 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef-kube-api-access-n7mcf" (OuterVolumeSpecName: "kube-api-access-n7mcf") pod "373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef" (UID: "373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef"). InnerVolumeSpecName "kube-api-access-n7mcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:17:27 crc kubenswrapper[4625]: I1202 15:17:27.627059 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7mcf\" (UniqueName: \"kubernetes.io/projected/373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef-kube-api-access-n7mcf\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:27 crc kubenswrapper[4625]: I1202 15:17:27.627290 4625 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef-host\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:27 crc kubenswrapper[4625]: I1202 15:17:27.887755 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7zl6r/crc-debug-dpw9q"] Dec 02 15:17:27 crc kubenswrapper[4625]: I1202 15:17:27.898006 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7zl6r/crc-debug-dpw9q"] Dec 02 15:17:28 crc kubenswrapper[4625]: I1202 15:17:28.328611 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d982daf8c0d6b66ccb39f33bc3f4d9c50118c5ffc0168333fdf46f7a6733cd2" Dec 02 15:17:28 crc kubenswrapper[4625]: I1202 15:17:28.328714 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7zl6r/crc-debug-dpw9q" Dec 02 15:17:28 crc kubenswrapper[4625]: E1202 15:17:28.636831 4625 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod373547f7_8a5d_4b5e_8f76_da8b8a3bd4ef.slice/crio-7d982daf8c0d6b66ccb39f33bc3f4d9c50118c5ffc0168333fdf46f7a6733cd2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod373547f7_8a5d_4b5e_8f76_da8b8a3bd4ef.slice\": RecentStats: unable to find data in memory cache]" Dec 02 15:17:28 crc kubenswrapper[4625]: I1202 15:17:28.871000 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef" path="/var/lib/kubelet/pods/373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef/volumes" Dec 02 15:17:29 crc kubenswrapper[4625]: I1202 15:17:29.101985 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7zl6r/crc-debug-njf6z"] Dec 02 15:17:29 crc kubenswrapper[4625]: E1202 15:17:29.102504 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef" containerName="container-00" Dec 02 15:17:29 crc kubenswrapper[4625]: I1202 15:17:29.102531 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef" containerName="container-00" Dec 02 15:17:29 crc kubenswrapper[4625]: I1202 15:17:29.102820 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="373547f7-8a5d-4b5e-8f76-da8b8a3bd4ef" containerName="container-00" Dec 02 15:17:29 crc kubenswrapper[4625]: I1202 15:17:29.103661 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7zl6r/crc-debug-njf6z" Dec 02 15:17:29 crc kubenswrapper[4625]: I1202 15:17:29.106295 4625 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7zl6r"/"default-dockercfg-8qwkd" Dec 02 15:17:29 crc kubenswrapper[4625]: I1202 15:17:29.266919 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c22d0104-d908-4ff8-af1f-39e25dab818c-host\") pod \"crc-debug-njf6z\" (UID: \"c22d0104-d908-4ff8-af1f-39e25dab818c\") " pod="openshift-must-gather-7zl6r/crc-debug-njf6z" Dec 02 15:17:29 crc kubenswrapper[4625]: I1202 15:17:29.267702 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57ksw\" (UniqueName: \"kubernetes.io/projected/c22d0104-d908-4ff8-af1f-39e25dab818c-kube-api-access-57ksw\") pod \"crc-debug-njf6z\" (UID: \"c22d0104-d908-4ff8-af1f-39e25dab818c\") " pod="openshift-must-gather-7zl6r/crc-debug-njf6z" Dec 02 15:17:29 crc kubenswrapper[4625]: I1202 15:17:29.370138 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c22d0104-d908-4ff8-af1f-39e25dab818c-host\") pod \"crc-debug-njf6z\" (UID: \"c22d0104-d908-4ff8-af1f-39e25dab818c\") " pod="openshift-must-gather-7zl6r/crc-debug-njf6z" Dec 02 15:17:29 crc kubenswrapper[4625]: I1202 15:17:29.370284 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57ksw\" (UniqueName: \"kubernetes.io/projected/c22d0104-d908-4ff8-af1f-39e25dab818c-kube-api-access-57ksw\") pod \"crc-debug-njf6z\" (UID: \"c22d0104-d908-4ff8-af1f-39e25dab818c\") " pod="openshift-must-gather-7zl6r/crc-debug-njf6z" Dec 02 15:17:29 crc kubenswrapper[4625]: I1202 15:17:29.370595 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c22d0104-d908-4ff8-af1f-39e25dab818c-host\") pod \"crc-debug-njf6z\" (UID: \"c22d0104-d908-4ff8-af1f-39e25dab818c\") " pod="openshift-must-gather-7zl6r/crc-debug-njf6z" Dec 02 15:17:29 crc kubenswrapper[4625]: I1202 15:17:29.408111 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57ksw\" (UniqueName: \"kubernetes.io/projected/c22d0104-d908-4ff8-af1f-39e25dab818c-kube-api-access-57ksw\") pod \"crc-debug-njf6z\" (UID: \"c22d0104-d908-4ff8-af1f-39e25dab818c\") " pod="openshift-must-gather-7zl6r/crc-debug-njf6z" Dec 02 15:17:29 crc kubenswrapper[4625]: I1202 15:17:29.425491 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7zl6r/crc-debug-njf6z" Dec 02 15:17:30 crc kubenswrapper[4625]: I1202 15:17:30.368384 4625 generic.go:334] "Generic (PLEG): container finished" podID="c22d0104-d908-4ff8-af1f-39e25dab818c" containerID="972cdbdc1ab0684cb68f9a86029a7c3a16c3264b060dcf9d0349d682ea970253" exitCode=0 Dec 02 15:17:30 crc kubenswrapper[4625]: I1202 15:17:30.368431 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7zl6r/crc-debug-njf6z" event={"ID":"c22d0104-d908-4ff8-af1f-39e25dab818c","Type":"ContainerDied","Data":"972cdbdc1ab0684cb68f9a86029a7c3a16c3264b060dcf9d0349d682ea970253"} Dec 02 15:17:30 crc kubenswrapper[4625]: I1202 15:17:30.369025 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7zl6r/crc-debug-njf6z" event={"ID":"c22d0104-d908-4ff8-af1f-39e25dab818c","Type":"ContainerStarted","Data":"54fd776800e2e916ca8eed67dbfd1ebc60f7f606ea4c10a0803889804439a202"} Dec 02 15:17:30 crc kubenswrapper[4625]: I1202 15:17:30.429658 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7zl6r/crc-debug-njf6z"] Dec 02 15:17:30 crc kubenswrapper[4625]: I1202 15:17:30.441374 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7zl6r/crc-debug-njf6z"] Dec 02 15:17:31 crc kubenswrapper[4625]: I1202 15:17:31.586519 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7zl6r/crc-debug-njf6z" Dec 02 15:17:31 crc kubenswrapper[4625]: I1202 15:17:31.762113 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57ksw\" (UniqueName: \"kubernetes.io/projected/c22d0104-d908-4ff8-af1f-39e25dab818c-kube-api-access-57ksw\") pod \"c22d0104-d908-4ff8-af1f-39e25dab818c\" (UID: \"c22d0104-d908-4ff8-af1f-39e25dab818c\") " Dec 02 15:17:31 crc kubenswrapper[4625]: I1202 15:17:31.762752 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c22d0104-d908-4ff8-af1f-39e25dab818c-host\") pod \"c22d0104-d908-4ff8-af1f-39e25dab818c\" (UID: \"c22d0104-d908-4ff8-af1f-39e25dab818c\") " Dec 02 15:17:31 crc kubenswrapper[4625]: I1202 15:17:31.762893 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c22d0104-d908-4ff8-af1f-39e25dab818c-host" (OuterVolumeSpecName: "host") pod "c22d0104-d908-4ff8-af1f-39e25dab818c" (UID: "c22d0104-d908-4ff8-af1f-39e25dab818c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 02 15:17:31 crc kubenswrapper[4625]: I1202 15:17:31.764517 4625 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c22d0104-d908-4ff8-af1f-39e25dab818c-host\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:31 crc kubenswrapper[4625]: I1202 15:17:31.769343 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22d0104-d908-4ff8-af1f-39e25dab818c-kube-api-access-57ksw" (OuterVolumeSpecName: "kube-api-access-57ksw") pod "c22d0104-d908-4ff8-af1f-39e25dab818c" (UID: "c22d0104-d908-4ff8-af1f-39e25dab818c"). InnerVolumeSpecName "kube-api-access-57ksw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:17:31 crc kubenswrapper[4625]: I1202 15:17:31.867328 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57ksw\" (UniqueName: \"kubernetes.io/projected/c22d0104-d908-4ff8-af1f-39e25dab818c-kube-api-access-57ksw\") on node \"crc\" DevicePath \"\"" Dec 02 15:17:32 crc kubenswrapper[4625]: I1202 15:17:32.395028 4625 scope.go:117] "RemoveContainer" containerID="972cdbdc1ab0684cb68f9a86029a7c3a16c3264b060dcf9d0349d682ea970253" Dec 02 15:17:32 crc kubenswrapper[4625]: I1202 15:17:32.395265 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7zl6r/crc-debug-njf6z" Dec 02 15:17:32 crc kubenswrapper[4625]: I1202 15:17:32.868530 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c22d0104-d908-4ff8-af1f-39e25dab818c" path="/var/lib/kubelet/pods/c22d0104-d908-4ff8-af1f-39e25dab818c/volumes" Dec 02 15:18:16 crc kubenswrapper[4625]: I1202 15:18:16.513391 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f988fb4d-rk87d_fc88c0ad-8893-4168-bf0c-e9ed829f1b62/barbican-api/0.log" Dec 02 15:18:16 crc kubenswrapper[4625]: I1202 15:18:16.718525 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f988fb4d-rk87d_fc88c0ad-8893-4168-bf0c-e9ed829f1b62/barbican-api-log/0.log" Dec 02 15:18:16 crc kubenswrapper[4625]: I1202 15:18:16.761339 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d57b47bd4-2hxfs_ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d/barbican-keystone-listener/0.log" Dec 02 15:18:16 crc kubenswrapper[4625]: I1202 15:18:16.970130 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d57b47bd4-2hxfs_ceddbf80-bc5f-4c17-b475-9ec52d7a1b1d/barbican-keystone-listener-log/0.log" Dec 02 15:18:17 crc kubenswrapper[4625]: I1202 15:18:17.109444 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68c4cddcdc-kxpt7_183dcad1-443e-47e0-bc13-d98d7c316069/barbican-worker/0.log" Dec 02 15:18:17 crc kubenswrapper[4625]: I1202 15:18:17.164149 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68c4cddcdc-kxpt7_183dcad1-443e-47e0-bc13-d98d7c316069/barbican-worker-log/0.log" Dec 02 15:18:17 crc kubenswrapper[4625]: I1202 15:18:17.450871 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-rk9sb_b31c21d6-4087-4521-8566-14b2eeabb679/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:18:17 crc kubenswrapper[4625]: I1202 15:18:17.575944 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3bd330e7-048e-4237-a165-25f8c3bf6bc3/ceilometer-central-agent/0.log" Dec 02 15:18:18 crc kubenswrapper[4625]: I1202 15:18:18.101104 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3bd330e7-048e-4237-a165-25f8c3bf6bc3/ceilometer-notification-agent/0.log" Dec 02 15:18:18 crc kubenswrapper[4625]: I1202 15:18:18.327126 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3bd330e7-048e-4237-a165-25f8c3bf6bc3/proxy-httpd/0.log" Dec 02 15:18:18 crc kubenswrapper[4625]: I1202 15:18:18.404386 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3bd330e7-048e-4237-a165-25f8c3bf6bc3/sg-core/0.log" Dec 02 15:18:18 crc kubenswrapper[4625]: 
I1202 15:18:18.602776 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3d2c435c-5496-4ec7-ac3f-eab4e5728204/cinder-api/0.log" Dec 02 15:18:18 crc kubenswrapper[4625]: I1202 15:18:18.752244 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3d2c435c-5496-4ec7-ac3f-eab4e5728204/cinder-api-log/0.log" Dec 02 15:18:18 crc kubenswrapper[4625]: I1202 15:18:18.794449 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9a4032f9-0bbb-4491-9f59-8b6006133dd6/cinder-scheduler/0.log" Dec 02 15:18:19 crc kubenswrapper[4625]: I1202 15:18:19.067681 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9a4032f9-0bbb-4491-9f59-8b6006133dd6/probe/0.log" Dec 02 15:18:19 crc kubenswrapper[4625]: I1202 15:18:19.203256 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5zmlw_4ab12756-db3d-4271-9017-d059eb68113e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:18:19 crc kubenswrapper[4625]: I1202 15:18:19.271420 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:18:19 crc kubenswrapper[4625]: I1202 15:18:19.271499 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:18:19 crc kubenswrapper[4625]: I1202 15:18:19.399684 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xqtlm_acb5fca1-1ef0-4678-8323-5a42790a0998/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:18:19 crc kubenswrapper[4625]: I1202 15:18:19.606524 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-54ffdb7d8c-skjm6_5908f5de-9af5-4cde-abf8-5959a6c8648e/init/0.log" Dec 02 15:18:19 crc kubenswrapper[4625]: I1202 15:18:19.831826 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-54ffdb7d8c-skjm6_5908f5de-9af5-4cde-abf8-5959a6c8648e/init/0.log" Dec 02 15:18:20 crc kubenswrapper[4625]: I1202 15:18:20.073589 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-gmtjg_1b3affb2-6aa8-445a-81cd-6bdb90c31f45/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:18:20 crc kubenswrapper[4625]: I1202 15:18:20.099524 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-54ffdb7d8c-skjm6_5908f5de-9af5-4cde-abf8-5959a6c8648e/dnsmasq-dns/0.log" Dec 02 15:18:20 crc kubenswrapper[4625]: I1202 15:18:20.358956 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73/glance-log/0.log" Dec 02 15:18:20 crc kubenswrapper[4625]: I1202 15:18:20.488199 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_7037c9dd-e07f-4013-9e9b-f4ff1bfbdd73/glance-httpd/0.log" Dec 02 15:18:20 crc kubenswrapper[4625]: 
I1202 15:18:20.612290 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7831eff5-dd90-4e3d-b6ec-86ec291099f2/glance-httpd/0.log" Dec 02 15:18:20 crc kubenswrapper[4625]: I1202 15:18:20.752647 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7831eff5-dd90-4e3d-b6ec-86ec291099f2/glance-log/0.log" Dec 02 15:18:20 crc kubenswrapper[4625]: I1202 15:18:20.870938 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7dc4db5bfb-zbs4l_92339196-3d33-4b76-9ba2-81e1a8373e84/horizon/1.log" Dec 02 15:18:21 crc kubenswrapper[4625]: I1202 15:18:21.012445 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7dc4db5bfb-zbs4l_92339196-3d33-4b76-9ba2-81e1a8373e84/horizon/0.log" Dec 02 15:18:21 crc kubenswrapper[4625]: I1202 15:18:21.278811 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-9jvx7_a7c36e4d-5e3c-4036-abef-01a4eb799665/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:18:21 crc kubenswrapper[4625]: I1202 15:18:21.427803 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-ttw4p_6b962e31-3a28-4083-af18-2c1b6f53b3b3/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:18:21 crc kubenswrapper[4625]: I1202 15:18:21.659821 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7dc4db5bfb-zbs4l_92339196-3d33-4b76-9ba2-81e1a8373e84/horizon-log/0.log" Dec 02 15:18:22 crc kubenswrapper[4625]: I1202 15:18:22.107458 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29411461-8xbzd_7bb98122-d182-47de-a568-e8c5c90072fa/keystone-cron/0.log" Dec 02 15:18:22 crc kubenswrapper[4625]: I1202 15:18:22.454506 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dcd2m_62d61250-750b-4a2d-b2d6-a5f1b4914da4/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:18:22 crc kubenswrapper[4625]: I1202 15:18:22.513817 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_34ec415c-0f48-4b5b-98f6-6f854c2910ee/kube-state-metrics/0.log" Dec 02 15:18:22 crc kubenswrapper[4625]: I1202 15:18:22.610775 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6f7c78dbd6-lsbdb_f5f7b2e0-20a9-42b2-b323-4c813153f09f/keystone-api/0.log" Dec 02 15:18:23 crc kubenswrapper[4625]: I1202 15:18:23.403249 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-sx98b_4a01556d-36c3-4d01-9c45-faccb3941b62/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:18:23 crc kubenswrapper[4625]: I1202 15:18:23.711820 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8c746598f-ss7rg_f4477e45-6d29-4717-9168-8cf291295a40/neutron-api/0.log" Dec 02 15:18:23 crc kubenswrapper[4625]: I1202 15:18:23.714234 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8c746598f-ss7rg_f4477e45-6d29-4717-9168-8cf291295a40/neutron-httpd/0.log" Dec 02 15:18:24 crc kubenswrapper[4625]: I1202 15:18:24.556293 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_cc85f464-74b0-41f9-9997-21e67c3c7e3a/nova-cell0-conductor-conductor/0.log" Dec 02 15:18:25 crc 
kubenswrapper[4625]: I1202 15:18:25.442100 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c02f586b-acea-434e-9258-d9cd407b3595/nova-cell1-conductor-conductor/0.log" Dec 02 15:18:25 crc kubenswrapper[4625]: I1202 15:18:25.640014 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1800930c-5ef6-4a3e-8a80-df933d636e5b/nova-api-log/0.log" Dec 02 15:18:25 crc kubenswrapper[4625]: I1202 15:18:25.807234 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ae13d171-d7b4-4f87-b94b-b19de24b35b6/nova-cell1-novncproxy-novncproxy/0.log" Dec 02 15:18:26 crc kubenswrapper[4625]: I1202 15:18:26.209832 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-qxsfq_c531f95a-508b-48ea-bfb7-91659bd6df10/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:18:26 crc kubenswrapper[4625]: I1202 15:18:26.220106 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1800930c-5ef6-4a3e-8a80-df933d636e5b/nova-api-api/0.log" Dec 02 15:18:27 crc kubenswrapper[4625]: I1202 15:18:27.087632 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0994edfd-9799-4974-8f7e-eb4cf312a370/nova-metadata-log/0.log" Dec 02 15:18:27 crc kubenswrapper[4625]: I1202 15:18:27.467445 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_266c6414-c5b8-4dd2-939d-2386a0756d9c/mysql-bootstrap/0.log" Dec 02 15:18:27 crc kubenswrapper[4625]: I1202 15:18:27.647049 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_266c6414-c5b8-4dd2-939d-2386a0756d9c/mysql-bootstrap/0.log" Dec 02 15:18:27 crc kubenswrapper[4625]: I1202 15:18:27.651850 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6fdf8d29-fbd9-4e2d-8f96-dc4153d0e24a/nova-scheduler-scheduler/0.log" Dec 02 15:18:27 crc kubenswrapper[4625]: I1202 15:18:27.895093 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_266c6414-c5b8-4dd2-939d-2386a0756d9c/galera/0.log" Dec 02 15:18:28 crc kubenswrapper[4625]: I1202 15:18:28.135247 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2e108301-d560-49b4-a4b2-a2f45c2fa8fd/mysql-bootstrap/0.log" Dec 02 15:18:28 crc kubenswrapper[4625]: I1202 15:18:28.370455 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2e108301-d560-49b4-a4b2-a2f45c2fa8fd/mysql-bootstrap/0.log" Dec 02 15:18:28 crc kubenswrapper[4625]: I1202 15:18:28.418899 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2e108301-d560-49b4-a4b2-a2f45c2fa8fd/galera/0.log" Dec 02 15:18:28 crc kubenswrapper[4625]: I1202 15:18:28.673275 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7e342617-f071-4967-a02d-38534c2c7c11/openstackclient/0.log" Dec 02 15:18:28 crc kubenswrapper[4625]: I1202 15:18:28.794163 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-5hzbv_3fe58841-9566-4a48-9e44-6709020a943c/ovn-controller/0.log" Dec 02 15:18:29 crc kubenswrapper[4625]: I1202 15:18:29.142544 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-ph9m2_2bef51af-8aba-4e71-a607-69b0e7facae6/openstack-network-exporter/0.log" Dec 02 15:18:29 crc 
kubenswrapper[4625]: I1202 15:18:29.285909 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bqzbz_0be922f6-018c-4504-bc6a-f0c8dd53ce5b/ovsdb-server-init/0.log" Dec 02 15:18:29 crc kubenswrapper[4625]: I1202 15:18:29.458440 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0994edfd-9799-4974-8f7e-eb4cf312a370/nova-metadata-metadata/0.log" Dec 02 15:18:29 crc kubenswrapper[4625]: I1202 15:18:29.998497 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bqzbz_0be922f6-018c-4504-bc6a-f0c8dd53ce5b/ovs-vswitchd/0.log" Dec 02 15:18:30 crc kubenswrapper[4625]: I1202 15:18:30.039509 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bqzbz_0be922f6-018c-4504-bc6a-f0c8dd53ce5b/ovsdb-server-init/0.log" Dec 02 15:18:30 crc kubenswrapper[4625]: I1202 15:18:30.103039 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bqzbz_0be922f6-018c-4504-bc6a-f0c8dd53ce5b/ovsdb-server/0.log" Dec 02 15:18:30 crc kubenswrapper[4625]: I1202 15:18:30.363907 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-s5gxh_9505225f-1412-45d6-96f3-27b3ab5c35c1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:18:30 crc kubenswrapper[4625]: I1202 15:18:30.513004 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f8bda2bc-c054-4188-ad43-47b49dab4949/openstack-network-exporter/0.log" Dec 02 15:18:30 crc kubenswrapper[4625]: I1202 15:18:30.538135 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f8bda2bc-c054-4188-ad43-47b49dab4949/ovn-northd/0.log" Dec 02 15:18:30 crc kubenswrapper[4625]: I1202 15:18:30.951409 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_31013ff5-b8be-40fb-9e34-5eac74bd1849/openstack-network-exporter/0.log" Dec 02 15:18:30 crc kubenswrapper[4625]: I1202 15:18:30.957740 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_31013ff5-b8be-40fb-9e34-5eac74bd1849/ovsdbserver-nb/0.log" Dec 02 15:18:31 crc kubenswrapper[4625]: I1202 15:18:31.122825 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3295e090-e4c0-4c88-a3aa-9d938e0b541d/openstack-network-exporter/0.log" Dec 02 15:18:31 crc kubenswrapper[4625]: I1202 15:18:31.291854 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3295e090-e4c0-4c88-a3aa-9d938e0b541d/ovsdbserver-sb/0.log" Dec 02 15:18:31 crc kubenswrapper[4625]: I1202 15:18:31.686920 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5855fb4fd-8xvmf_dc3dac2b-e3ca-4fde-b347-598e80af89ce/placement-api/0.log" Dec 02 15:18:31 crc kubenswrapper[4625]: I1202 15:18:31.806371 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_50ba9ca8-e722-4c48-9435-a358d35a893e/setup-container/0.log" Dec 02 15:18:31 crc kubenswrapper[4625]: I1202 15:18:31.838679 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8f8269cf-38ac-4207-be57-909e352cb528/memcached/0.log" Dec 02 15:18:31 crc kubenswrapper[4625]: I1202 15:18:31.858494 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5855fb4fd-8xvmf_dc3dac2b-e3ca-4fde-b347-598e80af89ce/placement-log/0.log" Dec 02 15:18:31 crc 
kubenswrapper[4625]: I1202 15:18:31.976102 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_50ba9ca8-e722-4c48-9435-a358d35a893e/setup-container/0.log" Dec 02 15:18:32 crc kubenswrapper[4625]: I1202 15:18:32.014345 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_50ba9ca8-e722-4c48-9435-a358d35a893e/rabbitmq/0.log" Dec 02 15:18:32 crc kubenswrapper[4625]: I1202 15:18:32.402905 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5eb1d307-4690-436e-8f82-a27eff014c84/setup-container/0.log" Dec 02 15:18:32 crc kubenswrapper[4625]: I1202 15:18:32.622758 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5eb1d307-4690-436e-8f82-a27eff014c84/setup-container/0.log" Dec 02 15:18:32 crc kubenswrapper[4625]: I1202 15:18:32.756833 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-4mhz4_abdce099-8a70-4557-860e-379c32fd5d6c/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:18:32 crc kubenswrapper[4625]: I1202 15:18:32.762386 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5eb1d307-4690-436e-8f82-a27eff014c84/rabbitmq/0.log" Dec 02 15:18:32 crc kubenswrapper[4625]: I1202 15:18:32.903791 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4d7z6_1062a08b-7d27-49af-bc24-3d8aae739f10/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:18:33 crc kubenswrapper[4625]: I1202 15:18:33.293269 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-zjgs2_d28486df-5cbd-4cf1-ab77-3bb7c4582d36/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:18:33 crc kubenswrapper[4625]: I1202 15:18:33.322367 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-g8jql_bfdfc7da-d385-4c7c-8e45-fd36703b7fb6/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:18:33 crc kubenswrapper[4625]: I1202 15:18:33.768821 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-76psc_20d1bcc5-faf7-4265-bdd8-f471a4d449cd/ssh-known-hosts-edpm-deployment/0.log" Dec 02 15:18:33 crc kubenswrapper[4625]: I1202 15:18:33.800551 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6fb4775b59-xb9rg_afdaf455-8ee8-42c2-8086-305834a075a5/proxy-server/0.log" Dec 02 15:18:33 crc kubenswrapper[4625]: I1202 15:18:33.823447 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6fb4775b59-xb9rg_afdaf455-8ee8-42c2-8086-305834a075a5/proxy-httpd/0.log" Dec 02 15:18:34 crc kubenswrapper[4625]: I1202 15:18:34.011574 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-9hhrz_b5858ebe-f677-4a48-b729-a8c4023b346d/swift-ring-rebalance/0.log" Dec 02 15:18:34 crc kubenswrapper[4625]: I1202 15:18:34.125737 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/account-reaper/0.log" Dec 02 15:18:34 crc kubenswrapper[4625]: I1202 15:18:34.268775 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/account-auditor/0.log" Dec 02 15:18:34 crc kubenswrapper[4625]: 
I1202 15:18:34.307767 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/account-server/0.log" Dec 02 15:18:34 crc kubenswrapper[4625]: I1202 15:18:34.350837 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/container-auditor/0.log" Dec 02 15:18:34 crc kubenswrapper[4625]: I1202 15:18:34.374424 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/account-replicator/0.log" Dec 02 15:18:34 crc kubenswrapper[4625]: I1202 15:18:34.535327 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/container-replicator/0.log" Dec 02 15:18:34 crc kubenswrapper[4625]: I1202 15:18:34.577868 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/container-server/0.log" Dec 02 15:18:34 crc kubenswrapper[4625]: I1202 15:18:34.630839 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/container-updater/0.log" Dec 02 15:18:34 crc kubenswrapper[4625]: I1202 15:18:34.639261 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/object-auditor/0.log" Dec 02 15:18:34 crc kubenswrapper[4625]: I1202 15:18:34.659175 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/object-expirer/0.log" Dec 02 15:18:34 crc kubenswrapper[4625]: I1202 15:18:34.818139 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/object-server/0.log" Dec 02 15:18:34 crc kubenswrapper[4625]: I1202 15:18:34.854370 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/object-replicator/0.log" Dec 02 15:18:34 crc kubenswrapper[4625]: I1202 15:18:34.879724 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/object-updater/0.log" Dec 02 15:18:35 crc kubenswrapper[4625]: I1202 15:18:35.568235 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/swift-recon-cron/0.log" Dec 02 15:18:35 crc kubenswrapper[4625]: I1202 15:18:35.623741 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b2953913-1ab3-4821-ab7d-8a20cb58ad90/rsync/0.log" Dec 02 15:18:35 crc kubenswrapper[4625]: I1202 15:18:35.703968 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-hzvz5_01572dfd-9cb1-4c55-90fc-759a859f60e4/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:18:35 crc kubenswrapper[4625]: I1202 15:18:35.846758 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_f72b183c-9a68-408e-b6b0-2accb1e96305/tempest-tests-tempest-tests-runner/0.log" Dec 02 15:18:36 crc kubenswrapper[4625]: I1202 15:18:36.058022 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_3042b4b8-7047-404f-b6b2-70c510508bc6/test-operator-logs-container/0.log" Dec 02 15:18:36 crc kubenswrapper[4625]: I1202 
15:18:36.115958 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-qmdfm_4b2123a9-3349-49ed-a533-b0550d7babc0/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 02 15:18:49 crc kubenswrapper[4625]: I1202 15:18:49.272092 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:18:49 crc kubenswrapper[4625]: I1202 15:18:49.272730 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:19:12 crc kubenswrapper[4625]: I1202 15:19:12.429142 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk_f7a86a59-0433-4fd8-95b8-f1ca65eeaba8/util/0.log" Dec 02 15:19:12 crc kubenswrapper[4625]: I1202 15:19:12.594736 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk_f7a86a59-0433-4fd8-95b8-f1ca65eeaba8/pull/0.log" Dec 02 15:19:12 crc kubenswrapper[4625]: I1202 15:19:12.629177 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk_f7a86a59-0433-4fd8-95b8-f1ca65eeaba8/util/0.log" Dec 02 15:19:12 crc kubenswrapper[4625]: I1202 15:19:12.704339 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk_f7a86a59-0433-4fd8-95b8-f1ca65eeaba8/pull/0.log" Dec 02 15:19:12 crc kubenswrapper[4625]: I1202 15:19:12.950053 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk_f7a86a59-0433-4fd8-95b8-f1ca65eeaba8/pull/0.log" Dec 02 15:19:12 crc kubenswrapper[4625]: I1202 15:19:12.976754 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk_f7a86a59-0433-4fd8-95b8-f1ca65eeaba8/util/0.log" Dec 02 15:19:12 crc kubenswrapper[4625]: I1202 15:19:12.982730 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_75b10f7e51b6bc00044ce2e5fc3335a595872a43e1f5d171c92c50afd2bsxxk_f7a86a59-0433-4fd8-95b8-f1ca65eeaba8/extract/0.log" Dec 02 15:19:13 crc kubenswrapper[4625]: I1202 15:19:13.226943 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-vck47_a1bf70dd-f5d1-45a9-94a9-86fffb0758b2/manager/0.log" Dec 02 15:19:13 crc kubenswrapper[4625]: I1202 15:19:13.483236 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-vck47_a1bf70dd-f5d1-45a9-94a9-86fffb0758b2/kube-rbac-proxy/0.log" Dec 02 15:19:13 crc kubenswrapper[4625]: I1202 15:19:13.649869 4625 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55f4dbb9b7-bhlt2_95cd9233-3d9c-45e1-ade0-6753a952b721/kube-rbac-proxy/0.log" Dec 02 15:19:13 crc kubenswrapper[4625]: I1202 15:19:13.802586 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55f4dbb9b7-bhlt2_95cd9233-3d9c-45e1-ade0-6753a952b721/manager/0.log" Dec 02 15:19:13 crc kubenswrapper[4625]: I1202 15:19:13.881063 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-kl65q_d20c6701-017d-4f33-91f0-10199890032f/kube-rbac-proxy/0.log" Dec 02 15:19:13 crc kubenswrapper[4625]: I1202 15:19:13.928694 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-kl65q_d20c6701-017d-4f33-91f0-10199890032f/manager/0.log" Dec 02 15:19:14 crc kubenswrapper[4625]: I1202 15:19:14.189122 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-q854l_15504c82-ed79-4ab3-a157-7493e0b13058/kube-rbac-proxy/0.log" Dec 02 15:19:14 crc kubenswrapper[4625]: I1202 15:19:14.309038 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-q854l_15504c82-ed79-4ab3-a157-7493e0b13058/manager/0.log" Dec 02 15:19:14 crc kubenswrapper[4625]: I1202 15:19:14.532849 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-jhwcz_137e6ec9-76ad-4b65-a788-a8a38f84343f/kube-rbac-proxy/0.log" Dec 02 15:19:14 crc kubenswrapper[4625]: I1202 15:19:14.570669 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-jhwcz_137e6ec9-76ad-4b65-a788-a8a38f84343f/manager/0.log" Dec 02 15:19:14 crc kubenswrapper[4625]: I1202 15:19:14.719118 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-b882s_95a50933-f183-45d5-b8e2-aac85155551e/kube-rbac-proxy/0.log" Dec 02 15:19:14 crc kubenswrapper[4625]: I1202 15:19:14.853637 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-pr84z_73f97b4a-0c9b-4422-a7fd-e2aab20f9825/kube-rbac-proxy/0.log" Dec 02 15:19:14 crc kubenswrapper[4625]: I1202 15:19:14.890169 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-b882s_95a50933-f183-45d5-b8e2-aac85155551e/manager/0.log" Dec 02 15:19:15 crc kubenswrapper[4625]: I1202 15:19:15.312834 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-pr84z_73f97b4a-0c9b-4422-a7fd-e2aab20f9825/manager/0.log" Dec 02 15:19:15 crc kubenswrapper[4625]: I1202 15:19:15.548912 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-qtsvv_a686420b-bad9-418e-b729-96680afd0f07/kube-rbac-proxy/0.log" Dec 02 15:19:15 crc kubenswrapper[4625]: I1202 15:19:15.597610 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-qtsvv_a686420b-bad9-418e-b729-96680afd0f07/manager/0.log" Dec 02 15:19:15 crc kubenswrapper[4625]: I1202 15:19:15.857960 4625 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-k4nlb_0fbe84bd-4dc3-4f2c-b890-16a1b15f4d0e/kube-rbac-proxy/0.log" Dec 02 15:19:15 crc kubenswrapper[4625]: I1202 15:19:15.968550 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-k4nlb_0fbe84bd-4dc3-4f2c-b890-16a1b15f4d0e/manager/0.log" Dec 02 15:19:16 crc kubenswrapper[4625]: I1202 15:19:16.033758 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-lwmht_5e58537e-7499-41c1-b154-ff06bd4dd58a/kube-rbac-proxy/0.log" Dec 02 15:19:16 crc kubenswrapper[4625]: I1202 15:19:16.225350 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-lwmht_5e58537e-7499-41c1-b154-ff06bd4dd58a/manager/0.log" Dec 02 15:19:16 crc kubenswrapper[4625]: I1202 15:19:16.235468 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-gr5bt_79d42122-959c-41e1-9c56-58788fd56100/kube-rbac-proxy/0.log" Dec 02 15:19:16 crc kubenswrapper[4625]: I1202 15:19:16.288468 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-gr5bt_79d42122-959c-41e1-9c56-58788fd56100/manager/0.log" Dec 02 15:19:16 crc kubenswrapper[4625]: I1202 15:19:16.606992 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-jwks4_910705f2-ee02-421a-a0eb-eb594d119f9e/manager/0.log" Dec 02 15:19:16 crc kubenswrapper[4625]: I1202 15:19:16.653886 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-zk4xg_cc5f44ae-eba1-40ca-8391-49985c6211bd/kube-rbac-proxy/0.log" Dec 02 15:19:16 crc kubenswrapper[4625]: I1202 15:19:16.664230 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-jwks4_910705f2-ee02-421a-a0eb-eb594d119f9e/kube-rbac-proxy/0.log" Dec 02 15:19:16 crc kubenswrapper[4625]: I1202 15:19:16.857877 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-zk4xg_cc5f44ae-eba1-40ca-8391-49985c6211bd/manager/0.log" Dec 02 15:19:16 crc kubenswrapper[4625]: I1202 15:19:16.892098 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-lg24z_d04e4d3e-b826-40ad-9955-7c7ba1379920/kube-rbac-proxy/0.log" Dec 02 15:19:16 crc kubenswrapper[4625]: I1202 15:19:16.970693 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-lg24z_d04e4d3e-b826-40ad-9955-7c7ba1379920/manager/0.log" Dec 02 15:19:17 crc kubenswrapper[4625]: I1202 15:19:17.139601 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck_a9612490-cbef-4040-a5f5-26737160de83/kube-rbac-proxy/0.log" Dec 02 15:19:17 crc kubenswrapper[4625]: I1202 15:19:17.270280 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4wj9ck_a9612490-cbef-4040-a5f5-26737160de83/manager/0.log" Dec 02 15:19:17 crc kubenswrapper[4625]: 
I1202 15:19:17.681639 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gdt46_e7071f61-c3c4-4f5a-b3ea-5fc268f55dbd/registry-server/0.log" Dec 02 15:19:17 crc kubenswrapper[4625]: I1202 15:19:17.858133 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-84d58866d9-k5nd2_aebd70d9-f01d-4141-bfc6-972472620c50/operator/0.log" Dec 02 15:19:18 crc kubenswrapper[4625]: I1202 15:19:18.088675 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-jktk5_0f5a3014-4394-4a6f-972e-52f2ef19328f/kube-rbac-proxy/0.log" Dec 02 15:19:18 crc kubenswrapper[4625]: I1202 15:19:18.150831 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-jktk5_0f5a3014-4394-4a6f-972e-52f2ef19328f/manager/0.log" Dec 02 15:19:18 crc kubenswrapper[4625]: I1202 15:19:18.305692 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-qgjn7_4dff1b74-d58f-40b9-a3a6-c1ebdd498690/kube-rbac-proxy/0.log" Dec 02 15:19:18 crc kubenswrapper[4625]: I1202 15:19:18.441563 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58cd586464-xpfd6_e3cfbc8e-665f-4007-a38d-714f53c48923/manager/0.log" Dec 02 15:19:18 crc kubenswrapper[4625]: I1202 15:19:18.466959 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-qgjn7_4dff1b74-d58f-40b9-a3a6-c1ebdd498690/manager/0.log" Dec 02 15:19:18 crc kubenswrapper[4625]: I1202 15:19:18.526068 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7lmwf_38eaf493-09d1-441e-81a9-777174f24006/operator/0.log" Dec 02 15:19:19 crc kubenswrapper[4625]: I1202 15:19:19.091144 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-ls4vx_0daed4ec-6cef-4f70-bdf2-27c278868917/kube-rbac-proxy/0.log" Dec 02 15:19:19 crc kubenswrapper[4625]: I1202 15:19:19.169468 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-ls4vx_0daed4ec-6cef-4f70-bdf2-27c278868917/manager/0.log" Dec 02 15:19:19 crc kubenswrapper[4625]: I1202 15:19:19.272172 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:19:19 crc kubenswrapper[4625]: I1202 15:19:19.272254 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:19:19 crc kubenswrapper[4625]: I1202 15:19:19.272334 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 15:19:19 crc kubenswrapper[4625]: I1202 15:19:19.273295 4625 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e879b6051207e9e1241754852b2a846deb0a2c61815c504ee18197eab67afcda"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 15:19:19 crc kubenswrapper[4625]: I1202 15:19:19.273373 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://e879b6051207e9e1241754852b2a846deb0a2c61815c504ee18197eab67afcda" gracePeriod=600 Dec 02 15:19:19 crc kubenswrapper[4625]: I1202 15:19:19.649332 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-wqq4b_37a0be8e-736e-486e-a1af-abc65c34c25b/kube-rbac-proxy/0.log" Dec 02 15:19:19 crc kubenswrapper[4625]: I1202 15:19:19.690349 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-wqq4b_37a0be8e-736e-486e-a1af-abc65c34c25b/manager/0.log" Dec 02 15:19:19 crc kubenswrapper[4625]: I1202 15:19:19.694353 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-mmjnh_6075f378-b13f-422a-a3c0-3301d78d3fa9/manager/0.log" Dec 02 15:19:19 crc kubenswrapper[4625]: I1202 15:19:19.712220 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-mmjnh_6075f378-b13f-422a-a3c0-3301d78d3fa9/kube-rbac-proxy/0.log" Dec 02 15:19:19 crc kubenswrapper[4625]: I1202 15:19:19.888425 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="e879b6051207e9e1241754852b2a846deb0a2c61815c504ee18197eab67afcda" exitCode=0 Dec 02 15:19:19 crc kubenswrapper[4625]: I1202 15:19:19.888506 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"e879b6051207e9e1241754852b2a846deb0a2c61815c504ee18197eab67afcda"} Dec 02 15:19:19 crc kubenswrapper[4625]: I1202 15:19:19.888554 4625 scope.go:117] "RemoveContainer" containerID="70ed35f7c466ce31e59a7a655253f226ab53cdc0d798e78fd3dcf641e0ec91ca" Dec 02 15:19:19 crc kubenswrapper[4625]: I1202 15:19:19.903722 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-wpdzp_db262c5c-48c7-4749-990c-77993791ba47/kube-rbac-proxy/0.log" Dec 02 15:19:19 crc kubenswrapper[4625]: I1202 15:19:19.975156 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-wpdzp_db262c5c-48c7-4749-990c-77993791ba47/manager/0.log" Dec 02 15:19:20 crc kubenswrapper[4625]: I1202 15:19:20.903612 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerStarted","Data":"10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37"} Dec 02 15:19:44 crc kubenswrapper[4625]: I1202 15:19:44.319175 4625 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-sjskb_bfa9a143-ca0d-4f32-b9a7-b2acb327bedc/control-plane-machine-set-operator/0.log" Dec 02 15:19:44 crc kubenswrapper[4625]: I1202 15:19:44.437275 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-p4l8q_30f3fae9-f4d5-4f32-9498-5d2a2d801654/kube-rbac-proxy/0.log" Dec 02 15:19:44 crc kubenswrapper[4625]: I1202 15:19:44.536621 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-p4l8q_30f3fae9-f4d5-4f32-9498-5d2a2d801654/machine-api-operator/0.log" Dec 02 15:20:00 crc kubenswrapper[4625]: I1202 15:20:00.038830 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-6svm9_9143a513-bf1b-4452-bf75-f5fea106cda0/cert-manager-controller/0.log" Dec 02 15:20:00 crc kubenswrapper[4625]: I1202 15:20:00.322713 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-wtcwd_1a51a454-f4a2-4fac-9d2b-121515d4dcac/cert-manager-cainjector/0.log" Dec 02 15:20:00 crc kubenswrapper[4625]: I1202 15:20:00.383844 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-cxgzh_aeb44bf0-3409-493e-9666-17615ae63452/cert-manager-webhook/0.log" Dec 02 15:20:17 crc kubenswrapper[4625]: I1202 15:20:17.401297 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-xbmh7_c272e6ee-0c53-4601-bb13-b19116b52d78/nmstate-console-plugin/0.log" Dec 02 15:20:18 crc kubenswrapper[4625]: I1202 15:20:18.049651 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2gbcf_a8f7b0ab-7977-4789-bc67-ddb4be2ee9ab/nmstate-handler/0.log" Dec 02 15:20:18 crc kubenswrapper[4625]: I1202 15:20:18.082793 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-n6vs9_ec26d36a-6a53-45ea-b678-ff1f2f663e4b/kube-rbac-proxy/0.log" Dec 02 15:20:18 crc kubenswrapper[4625]: I1202 15:20:18.168978 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-n6vs9_ec26d36a-6a53-45ea-b678-ff1f2f663e4b/nmstate-metrics/0.log" Dec 02 15:20:18 crc kubenswrapper[4625]: I1202 15:20:18.397726 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-7wnsd_c990b211-885a-4f31-835b-ebc7d42db8dc/nmstate-operator/0.log" Dec 02 15:20:18 crc kubenswrapper[4625]: I1202 15:20:18.497155 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-27pvd_52f9dddb-330a-4c13-9bb7-6a7766b6c4ec/nmstate-webhook/0.log" Dec 02 15:20:36 crc kubenswrapper[4625]: I1202 15:20:36.267211 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-2jqbl_5bf7d269-353b-4ac4-a7e5-02c0cd01d62a/controller/0.log" Dec 02 15:20:36 crc kubenswrapper[4625]: I1202 15:20:36.283348 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-2jqbl_5bf7d269-353b-4ac4-a7e5-02c0cd01d62a/kube-rbac-proxy/0.log" Dec 02 15:20:36 crc kubenswrapper[4625]: I1202 15:20:36.552607 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-frr-files/0.log" Dec 02 15:20:36 crc kubenswrapper[4625]: I1202 
15:20:36.739000 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-frr-files/0.log" Dec 02 15:20:36 crc kubenswrapper[4625]: I1202 15:20:36.739133 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-reloader/0.log" Dec 02 15:20:36 crc kubenswrapper[4625]: I1202 15:20:36.838262 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-metrics/0.log" Dec 02 15:20:36 crc kubenswrapper[4625]: I1202 15:20:36.895663 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-reloader/0.log" Dec 02 15:20:37 crc kubenswrapper[4625]: I1202 15:20:37.026159 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-frr-files/0.log" Dec 02 15:20:37 crc kubenswrapper[4625]: I1202 15:20:37.057044 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-reloader/0.log" Dec 02 15:20:37 crc kubenswrapper[4625]: I1202 15:20:37.151569 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-metrics/0.log" Dec 02 15:20:37 crc kubenswrapper[4625]: I1202 15:20:37.151662 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-metrics/0.log" Dec 02 15:20:37 crc kubenswrapper[4625]: I1202 15:20:37.414889 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-frr-files/0.log" Dec 02 15:20:37 crc kubenswrapper[4625]: I1202 15:20:37.440353 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-reloader/0.log" Dec 02 15:20:37 crc kubenswrapper[4625]: I1202 15:20:37.483566 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/cp-metrics/0.log" Dec 02 15:20:37 crc kubenswrapper[4625]: I1202 15:20:37.486909 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/controller/0.log" Dec 02 15:20:37 crc kubenswrapper[4625]: I1202 15:20:37.784691 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/kube-rbac-proxy-frr/0.log" Dec 02 15:20:37 crc kubenswrapper[4625]: I1202 15:20:37.801520 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/kube-rbac-proxy/0.log" Dec 02 15:20:37 crc kubenswrapper[4625]: I1202 15:20:37.819825 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/frr-metrics/0.log" Dec 02 15:20:38 crc kubenswrapper[4625]: I1202 15:20:38.226508 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-72kpb_aad37202-ae48-4da9-b478-fad57dd764f2/frr-k8s-webhook-server/0.log" Dec 02 15:20:38 crc kubenswrapper[4625]: I1202 15:20:38.246575 4625 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/reloader/0.log" Dec 02 15:20:38 crc kubenswrapper[4625]: I1202 15:20:38.659552 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5f95f47f79-qms5t_3fd6af02-5a39-495f-8365-cd8ec3f3b051/manager/0.log" Dec 02 15:20:39 crc kubenswrapper[4625]: I1202 15:20:39.005014 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m2rh6_2bdff728-939b-414c-a0e9-35520fc54d71/frr/0.log" Dec 02 15:20:39 crc kubenswrapper[4625]: I1202 15:20:39.007495 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-867d4dc474-l4c4v_3980c773-00e8-4019-972e-e0f2f9724185/webhook-server/0.log" Dec 02 15:20:39 crc kubenswrapper[4625]: I1202 15:20:39.081133 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vrxh6_76b2f8e3-7f6f-4592-a2e1-542b76f8872d/kube-rbac-proxy/0.log" Dec 02 15:20:39 crc kubenswrapper[4625]: I1202 15:20:39.530472 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vrxh6_76b2f8e3-7f6f-4592-a2e1-542b76f8872d/speaker/0.log" Dec 02 15:20:52 crc kubenswrapper[4625]: I1202 15:20:52.792116 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qgm6c"] Dec 02 15:20:52 crc kubenswrapper[4625]: E1202 15:20:52.795708 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22d0104-d908-4ff8-af1f-39e25dab818c" containerName="container-00" Dec 02 15:20:52 crc kubenswrapper[4625]: I1202 15:20:52.795864 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22d0104-d908-4ff8-af1f-39e25dab818c" containerName="container-00" Dec 02 15:20:52 crc kubenswrapper[4625]: I1202 15:20:52.796193 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22d0104-d908-4ff8-af1f-39e25dab818c" containerName="container-00" Dec 02 15:20:52 crc kubenswrapper[4625]: I1202 15:20:52.798124 4625 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qgm6c" Dec 02 15:20:52 crc kubenswrapper[4625]: I1202 15:20:52.822751 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qgm6c"] Dec 02 15:20:52 crc kubenswrapper[4625]: I1202 15:20:52.878114 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/672295b7-9d7c-4959-9191-6807d9f3e91a-catalog-content\") pod \"redhat-operators-qgm6c\" (UID: \"672295b7-9d7c-4959-9191-6807d9f3e91a\") " pod="openshift-marketplace/redhat-operators-qgm6c" Dec 02 15:20:52 crc kubenswrapper[4625]: I1202 15:20:52.878188 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/672295b7-9d7c-4959-9191-6807d9f3e91a-utilities\") pod \"redhat-operators-qgm6c\" (UID: \"672295b7-9d7c-4959-9191-6807d9f3e91a\") " pod="openshift-marketplace/redhat-operators-qgm6c" Dec 02 15:20:52 crc kubenswrapper[4625]: I1202 15:20:52.878249 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-665w5\" (UniqueName: \"kubernetes.io/projected/672295b7-9d7c-4959-9191-6807d9f3e91a-kube-api-access-665w5\") pod \"redhat-operators-qgm6c\" (UID: \"672295b7-9d7c-4959-9191-6807d9f3e91a\") " pod="openshift-marketplace/redhat-operators-qgm6c" Dec 02 15:20:52 crc kubenswrapper[4625]: I1202 15:20:52.980200 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/672295b7-9d7c-4959-9191-6807d9f3e91a-catalog-content\") pod \"redhat-operators-qgm6c\" (UID: \"672295b7-9d7c-4959-9191-6807d9f3e91a\") " pod="openshift-marketplace/redhat-operators-qgm6c" Dec 02 15:20:52 crc kubenswrapper[4625]: I1202 15:20:52.980264 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/672295b7-9d7c-4959-9191-6807d9f3e91a-utilities\") pod \"redhat-operators-qgm6c\" (UID: \"672295b7-9d7c-4959-9191-6807d9f3e91a\") " pod="openshift-marketplace/redhat-operators-qgm6c" Dec 02 15:20:52 crc kubenswrapper[4625]: I1202 15:20:52.980326 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-665w5\" (UniqueName: \"kubernetes.io/projected/672295b7-9d7c-4959-9191-6807d9f3e91a-kube-api-access-665w5\") pod \"redhat-operators-qgm6c\" (UID: \"672295b7-9d7c-4959-9191-6807d9f3e91a\") " pod="openshift-marketplace/redhat-operators-qgm6c" Dec 02 15:20:52 crc kubenswrapper[4625]: I1202 15:20:52.981028 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/672295b7-9d7c-4959-9191-6807d9f3e91a-utilities\") pod \"redhat-operators-qgm6c\" (UID: \"672295b7-9d7c-4959-9191-6807d9f3e91a\") " pod="openshift-marketplace/redhat-operators-qgm6c" Dec 02 15:20:52 crc kubenswrapper[4625]: I1202 15:20:52.981178 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/672295b7-9d7c-4959-9191-6807d9f3e91a-catalog-content\") pod \"redhat-operators-qgm6c\" (UID: \"672295b7-9d7c-4959-9191-6807d9f3e91a\") " pod="openshift-marketplace/redhat-operators-qgm6c" Dec 02 15:20:53 crc kubenswrapper[4625]: I1202 15:20:53.002781 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-665w5\" (UniqueName: \"kubernetes.io/projected/672295b7-9d7c-4959-9191-6807d9f3e91a-kube-api-access-665w5\") pod \"redhat-operators-qgm6c\" (UID: \"672295b7-9d7c-4959-9191-6807d9f3e91a\") " pod="openshift-marketplace/redhat-operators-qgm6c" Dec 02 15:20:53 crc kubenswrapper[4625]: I1202 15:20:53.119583 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qgm6c" Dec 02 15:20:53 crc kubenswrapper[4625]: I1202 15:20:53.659074 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qgm6c"] Dec 02 15:20:54 crc kubenswrapper[4625]: I1202 15:20:54.118856 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgm6c" event={"ID":"672295b7-9d7c-4959-9191-6807d9f3e91a","Type":"ContainerDied","Data":"43563e74b65ba0d37d9b1a5ce762b0f43687010e26585994c90d9a0f77d2f77a"} Dec 02 15:20:54 crc kubenswrapper[4625]: I1202 15:20:54.120435 4625 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 02 15:20:54 crc kubenswrapper[4625]: I1202 15:20:54.118810 4625 generic.go:334] "Generic (PLEG): container finished" podID="672295b7-9d7c-4959-9191-6807d9f3e91a" containerID="43563e74b65ba0d37d9b1a5ce762b0f43687010e26585994c90d9a0f77d2f77a" exitCode=0 Dec 02 15:20:54 crc kubenswrapper[4625]: I1202 15:20:54.120883 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgm6c" event={"ID":"672295b7-9d7c-4959-9191-6807d9f3e91a","Type":"ContainerStarted","Data":"4a7d4718a5558db6c1095bab328d033f30ebd9f172b086b67a42dad7fdab0ee6"} Dec 02 15:20:55 crc kubenswrapper[4625]: I1202 15:20:55.883535 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm_6ba709b5-16eb-458a-a3ca-8d430acaf634/util/0.log" Dec 02 15:20:56 crc kubenswrapper[4625]: I1202 15:20:56.155474 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm_6ba709b5-16eb-458a-a3ca-8d430acaf634/pull/0.log" Dec 02 15:20:56 crc kubenswrapper[4625]: I1202 15:20:56.198855 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm_6ba709b5-16eb-458a-a3ca-8d430acaf634/util/0.log" Dec 02 15:20:56 crc kubenswrapper[4625]: I1202 15:20:56.284100 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm_6ba709b5-16eb-458a-a3ca-8d430acaf634/pull/0.log" Dec 02 15:20:56 crc kubenswrapper[4625]: I1202 15:20:56.559633 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm_6ba709b5-16eb-458a-a3ca-8d430acaf634/pull/0.log" Dec 02 15:20:56 crc kubenswrapper[4625]: I1202 15:20:56.636740 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm_6ba709b5-16eb-458a-a3ca-8d430acaf634/extract/0.log" Dec 02 15:20:56 crc kubenswrapper[4625]: I1202 15:20:56.641656 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fm46dm_6ba709b5-16eb-458a-a3ca-8d430acaf634/util/0.log" Dec 02 15:20:56 crc 
kubenswrapper[4625]: I1202 15:20:56.878840 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv_40ec79b5-40cb-49e8-b693-c63f4066b8ed/util/0.log" Dec 02 15:20:57 crc kubenswrapper[4625]: I1202 15:20:57.148102 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv_40ec79b5-40cb-49e8-b693-c63f4066b8ed/pull/0.log" Dec 02 15:20:57 crc kubenswrapper[4625]: I1202 15:20:57.163378 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv_40ec79b5-40cb-49e8-b693-c63f4066b8ed/pull/0.log" Dec 02 15:20:57 crc kubenswrapper[4625]: I1202 15:20:57.167822 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv_40ec79b5-40cb-49e8-b693-c63f4066b8ed/util/0.log" Dec 02 15:20:57 crc kubenswrapper[4625]: I1202 15:20:57.344206 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv_40ec79b5-40cb-49e8-b693-c63f4066b8ed/pull/0.log" Dec 02 15:20:57 crc kubenswrapper[4625]: I1202 15:20:57.407825 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv_40ec79b5-40cb-49e8-b693-c63f4066b8ed/util/0.log" Dec 02 15:20:57 crc kubenswrapper[4625]: I1202 15:20:57.453360 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f835wcqv_40ec79b5-40cb-49e8-b693-c63f4066b8ed/extract/0.log" Dec 02 15:20:57 crc kubenswrapper[4625]: I1202 15:20:57.570868 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fg2pz_e117ec72-f216-4090-87b4-d645c924c53f/extract-utilities/0.log" Dec 02 15:20:57 crc kubenswrapper[4625]: I1202 15:20:57.949537 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fg2pz_e117ec72-f216-4090-87b4-d645c924c53f/extract-utilities/0.log" Dec 02 15:20:57 crc kubenswrapper[4625]: I1202 15:20:57.994651 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fg2pz_e117ec72-f216-4090-87b4-d645c924c53f/extract-content/0.log" Dec 02 15:20:58 crc kubenswrapper[4625]: I1202 15:20:58.004996 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fg2pz_e117ec72-f216-4090-87b4-d645c924c53f/extract-content/0.log" Dec 02 15:20:58 crc kubenswrapper[4625]: I1202 15:20:58.260908 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fg2pz_e117ec72-f216-4090-87b4-d645c924c53f/extract-content/0.log" Dec 02 15:20:58 crc kubenswrapper[4625]: I1202 15:20:58.387133 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fg2pz_e117ec72-f216-4090-87b4-d645c924c53f/extract-utilities/0.log" Dec 02 15:20:58 crc kubenswrapper[4625]: I1202 15:20:58.718972 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wpzzz_2896c3fb-6a7b-41c8-816f-b4ee6ee231fc/extract-utilities/0.log" Dec 02 15:20:58 crc kubenswrapper[4625]: I1202 15:20:58.998011 4625 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fg2pz_e117ec72-f216-4090-87b4-d645c924c53f/registry-server/0.log" Dec 02 15:20:59 crc kubenswrapper[4625]: I1202 15:20:59.060436 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wpzzz_2896c3fb-6a7b-41c8-816f-b4ee6ee231fc/extract-content/0.log" Dec 02 15:20:59 crc kubenswrapper[4625]: I1202 15:20:59.061004 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wpzzz_2896c3fb-6a7b-41c8-816f-b4ee6ee231fc/extract-utilities/0.log" Dec 02 15:20:59 crc kubenswrapper[4625]: I1202 15:20:59.175230 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wpzzz_2896c3fb-6a7b-41c8-816f-b4ee6ee231fc/extract-content/0.log" Dec 02 15:20:59 crc kubenswrapper[4625]: I1202 15:20:59.376824 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wpzzz_2896c3fb-6a7b-41c8-816f-b4ee6ee231fc/extract-utilities/0.log" Dec 02 15:20:59 crc kubenswrapper[4625]: I1202 15:20:59.660824 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wpzzz_2896c3fb-6a7b-41c8-816f-b4ee6ee231fc/extract-content/0.log" Dec 02 15:20:59 crc kubenswrapper[4625]: I1202 15:20:59.980955 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lm5sj_82f63ecf-aa95-429e-a39a-796125dfa29c/marketplace-operator/0.log" Dec 02 15:21:00 crc kubenswrapper[4625]: I1202 15:21:00.164896 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wpzzz_2896c3fb-6a7b-41c8-816f-b4ee6ee231fc/registry-server/0.log" Dec 02 15:21:00 crc kubenswrapper[4625]: I1202 15:21:00.181382 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d27sv_1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7/extract-utilities/0.log" Dec 02 15:21:00 crc kubenswrapper[4625]: I1202 15:21:00.619858 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d27sv_1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7/extract-content/0.log" Dec 02 15:21:00 crc kubenswrapper[4625]: I1202 15:21:00.622893 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d27sv_1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7/extract-utilities/0.log" Dec 02 15:21:00 crc kubenswrapper[4625]: I1202 15:21:00.727611 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d27sv_1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7/extract-content/0.log" Dec 02 15:21:00 crc kubenswrapper[4625]: I1202 15:21:00.942056 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d27sv_1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7/extract-content/0.log" Dec 02 15:21:01 crc kubenswrapper[4625]: I1202 15:21:01.038665 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d27sv_1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7/extract-utilities/0.log" Dec 02 15:21:01 crc kubenswrapper[4625]: I1202 15:21:01.045232 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qgm6c_672295b7-9d7c-4959-9191-6807d9f3e91a/extract-utilities/0.log" Dec 02 15:21:01 crc kubenswrapper[4625]: I1202 15:21:01.262586 4625 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d27sv_1a3b53b2-9707-4f7e-94b7-dd3f7b8082e7/registry-server/0.log" Dec 02 15:21:01 crc kubenswrapper[4625]: I1202 15:21:01.419619 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qgm6c_672295b7-9d7c-4959-9191-6807d9f3e91a/extract-utilities/0.log" Dec 02 15:21:01 crc kubenswrapper[4625]: I1202 15:21:01.733238 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qgm6c_672295b7-9d7c-4959-9191-6807d9f3e91a/extract-utilities/0.log" Dec 02 15:21:01 crc kubenswrapper[4625]: I1202 15:21:01.753412 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sc4nh_435bd873-5e0f-4479-b59b-1fd1f39fd50e/extract-utilities/0.log" Dec 02 15:21:02 crc kubenswrapper[4625]: I1202 15:21:02.033766 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sc4nh_435bd873-5e0f-4479-b59b-1fd1f39fd50e/extract-utilities/0.log" Dec 02 15:21:02 crc kubenswrapper[4625]: I1202 15:21:02.144155 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sc4nh_435bd873-5e0f-4479-b59b-1fd1f39fd50e/extract-content/0.log" Dec 02 15:21:02 crc kubenswrapper[4625]: I1202 15:21:02.216706 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sc4nh_435bd873-5e0f-4479-b59b-1fd1f39fd50e/extract-content/0.log" Dec 02 15:21:02 crc kubenswrapper[4625]: I1202 15:21:02.399803 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sc4nh_435bd873-5e0f-4479-b59b-1fd1f39fd50e/extract-utilities/0.log" Dec 02 15:21:02 crc kubenswrapper[4625]: I1202 15:21:02.493324 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sc4nh_435bd873-5e0f-4479-b59b-1fd1f39fd50e/extract-content/0.log" Dec 02 15:21:03 crc kubenswrapper[4625]: I1202 15:21:03.200874 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sc4nh_435bd873-5e0f-4479-b59b-1fd1f39fd50e/registry-server/0.log" Dec 02 15:21:11 crc kubenswrapper[4625]: I1202 15:21:11.308744 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgm6c" event={"ID":"672295b7-9d7c-4959-9191-6807d9f3e91a","Type":"ContainerStarted","Data":"b439258e17e7b72b7b68b346f4b4dd064f193ad76b2bf4f23323cf45930216ab"} Dec 02 15:21:13 crc kubenswrapper[4625]: I1202 15:21:13.334205 4625 generic.go:334] "Generic (PLEG): container finished" podID="672295b7-9d7c-4959-9191-6807d9f3e91a" containerID="b439258e17e7b72b7b68b346f4b4dd064f193ad76b2bf4f23323cf45930216ab" exitCode=0 Dec 02 15:21:13 crc kubenswrapper[4625]: I1202 15:21:13.334404 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgm6c" event={"ID":"672295b7-9d7c-4959-9191-6807d9f3e91a","Type":"ContainerDied","Data":"b439258e17e7b72b7b68b346f4b4dd064f193ad76b2bf4f23323cf45930216ab"} Dec 02 15:21:15 crc kubenswrapper[4625]: I1202 15:21:15.362890 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgm6c" event={"ID":"672295b7-9d7c-4959-9191-6807d9f3e91a","Type":"ContainerStarted","Data":"b53e44b27a61389587608f4a638d5ac4cd60fc0959c9f2a0fa7ee70814e426ae"} Dec 02 15:21:15 crc kubenswrapper[4625]: I1202 15:21:15.398327 4625 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qgm6c" podStartSLOduration=3.163643185 podStartE2EDuration="23.39825753s" podCreationTimestamp="2025-12-02 15:20:52 +0000 UTC" firstStartedPulling="2025-12-02 15:20:54.120092678 +0000 UTC m=+5810.082269753" lastFinishedPulling="2025-12-02 15:21:14.354707023 +0000 UTC m=+5830.316884098" observedRunningTime="2025-12-02 15:21:15.389950746 +0000 UTC m=+5831.352127821" watchObservedRunningTime="2025-12-02 15:21:15.39825753 +0000 UTC m=+5831.360434605" Dec 02 15:21:23 crc kubenswrapper[4625]: I1202 15:21:23.120156 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qgm6c" Dec 02 15:21:23 crc kubenswrapper[4625]: I1202 15:21:23.120911 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qgm6c" Dec 02 15:21:23 crc kubenswrapper[4625]: I1202 15:21:23.188905 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qgm6c" Dec 02 15:21:23 crc kubenswrapper[4625]: I1202 15:21:23.505225 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qgm6c" Dec 02 15:21:23 crc kubenswrapper[4625]: I1202 15:21:23.817183 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qgm6c"] Dec 02 15:21:23 crc kubenswrapper[4625]: I1202 15:21:23.988861 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sc4nh"] Dec 02 15:21:23 crc kubenswrapper[4625]: I1202 15:21:23.991970 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sc4nh" podUID="435bd873-5e0f-4479-b59b-1fd1f39fd50e" containerName="registry-server" containerID="cri-o://0e7c5d177fbc75af5a85b78edbd3a8ef4d9703f6a788ba13f35c4f786c2c8729" gracePeriod=2 Dec 02 15:21:24 crc kubenswrapper[4625]: I1202 15:21:24.456899 4625 generic.go:334] "Generic (PLEG): container finished" podID="435bd873-5e0f-4479-b59b-1fd1f39fd50e" containerID="0e7c5d177fbc75af5a85b78edbd3a8ef4d9703f6a788ba13f35c4f786c2c8729" exitCode=0 Dec 02 15:21:24 crc kubenswrapper[4625]: I1202 15:21:24.457681 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc4nh" event={"ID":"435bd873-5e0f-4479-b59b-1fd1f39fd50e","Type":"ContainerDied","Data":"0e7c5d177fbc75af5a85b78edbd3a8ef4d9703f6a788ba13f35c4f786c2c8729"} Dec 02 15:21:25 crc kubenswrapper[4625]: I1202 15:21:25.706061 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sc4nh" Dec 02 15:21:25 crc kubenswrapper[4625]: I1202 15:21:25.832210 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2hkh\" (UniqueName: \"kubernetes.io/projected/435bd873-5e0f-4479-b59b-1fd1f39fd50e-kube-api-access-n2hkh\") pod \"435bd873-5e0f-4479-b59b-1fd1f39fd50e\" (UID: \"435bd873-5e0f-4479-b59b-1fd1f39fd50e\") " Dec 02 15:21:25 crc kubenswrapper[4625]: I1202 15:21:25.832407 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/435bd873-5e0f-4479-b59b-1fd1f39fd50e-utilities\") pod \"435bd873-5e0f-4479-b59b-1fd1f39fd50e\" (UID: \"435bd873-5e0f-4479-b59b-1fd1f39fd50e\") " Dec 02 15:21:25 crc kubenswrapper[4625]: I1202 15:21:25.832648 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/435bd873-5e0f-4479-b59b-1fd1f39fd50e-catalog-content\") pod \"435bd873-5e0f-4479-b59b-1fd1f39fd50e\" (UID: \"435bd873-5e0f-4479-b59b-1fd1f39fd50e\") " Dec 02 15:21:25 crc kubenswrapper[4625]: I1202 15:21:25.835767 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/435bd873-5e0f-4479-b59b-1fd1f39fd50e-utilities" (OuterVolumeSpecName: "utilities") pod "435bd873-5e0f-4479-b59b-1fd1f39fd50e" (UID: "435bd873-5e0f-4479-b59b-1fd1f39fd50e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:21:25 crc kubenswrapper[4625]: I1202 15:21:25.869691 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435bd873-5e0f-4479-b59b-1fd1f39fd50e-kube-api-access-n2hkh" (OuterVolumeSpecName: "kube-api-access-n2hkh") pod "435bd873-5e0f-4479-b59b-1fd1f39fd50e" (UID: "435bd873-5e0f-4479-b59b-1fd1f39fd50e"). InnerVolumeSpecName "kube-api-access-n2hkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:21:25 crc kubenswrapper[4625]: I1202 15:21:25.935640 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2hkh\" (UniqueName: \"kubernetes.io/projected/435bd873-5e0f-4479-b59b-1fd1f39fd50e-kube-api-access-n2hkh\") on node \"crc\" DevicePath \"\"" Dec 02 15:21:25 crc kubenswrapper[4625]: I1202 15:21:25.935671 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/435bd873-5e0f-4479-b59b-1fd1f39fd50e-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:21:26 crc kubenswrapper[4625]: I1202 15:21:26.105048 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/435bd873-5e0f-4479-b59b-1fd1f39fd50e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "435bd873-5e0f-4479-b59b-1fd1f39fd50e" (UID: "435bd873-5e0f-4479-b59b-1fd1f39fd50e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:21:26 crc kubenswrapper[4625]: I1202 15:21:26.144431 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/435bd873-5e0f-4479-b59b-1fd1f39fd50e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:21:26 crc kubenswrapper[4625]: I1202 15:21:26.524988 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sc4nh" event={"ID":"435bd873-5e0f-4479-b59b-1fd1f39fd50e","Type":"ContainerDied","Data":"744994dee5b3449248f9fa04397448936db218b8faa4cdb6ba68db40f52495ad"} Dec 02 15:21:26 crc kubenswrapper[4625]: I1202 15:21:26.525150 4625 scope.go:117] "RemoveContainer" containerID="0e7c5d177fbc75af5a85b78edbd3a8ef4d9703f6a788ba13f35c4f786c2c8729" Dec 02 15:21:26 crc kubenswrapper[4625]: I1202 15:21:26.525392 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sc4nh" Dec 02 15:21:26 crc kubenswrapper[4625]: I1202 15:21:26.590687 4625 scope.go:117] "RemoveContainer" containerID="1386810dd737be18d1b3e4175f6ee0511de368dbb94887294228e03fc8d7ad1f" Dec 02 15:21:26 crc kubenswrapper[4625]: I1202 15:21:26.603385 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sc4nh"] Dec 02 15:21:26 crc kubenswrapper[4625]: I1202 15:21:26.625985 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sc4nh"] Dec 02 15:21:26 crc kubenswrapper[4625]: I1202 15:21:26.646299 4625 scope.go:117] "RemoveContainer" containerID="3a6f1e71bf37cc79d2e1397745b59b28ef8cf84816eee283f4e65f145228baf0" Dec 02 15:21:26 crc kubenswrapper[4625]: I1202 15:21:26.868179 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="435bd873-5e0f-4479-b59b-1fd1f39fd50e" path="/var/lib/kubelet/pods/435bd873-5e0f-4479-b59b-1fd1f39fd50e/volumes" Dec 02 15:21:49 crc kubenswrapper[4625]: I1202 15:21:49.271333 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:21:49 crc kubenswrapper[4625]: I1202 15:21:49.272579 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:22:19 crc kubenswrapper[4625]: I1202 15:22:19.271613 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:22:19 crc kubenswrapper[4625]: I1202 15:22:19.272219 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:22:39 crc kubenswrapper[4625]: I1202 15:22:39.473712 4625 scope.go:117] 
"RemoveContainer" containerID="c4cbf471f6e07730427d2d2bd835919983a1d91cdbf49e6bd6abb20fc4a7ccdd" Dec 02 15:22:49 crc kubenswrapper[4625]: I1202 15:22:49.271824 4625 patch_prober.go:28] interesting pod/machine-config-daemon-c6d9f container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 02 15:22:49 crc kubenswrapper[4625]: I1202 15:22:49.272401 4625 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 02 15:22:49 crc kubenswrapper[4625]: I1202 15:22:49.272456 4625 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" Dec 02 15:22:49 crc kubenswrapper[4625]: I1202 15:22:49.273444 4625 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37"} pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 02 15:22:49 crc kubenswrapper[4625]: I1202 15:22:49.273507 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" containerName="machine-config-daemon" containerID="cri-o://10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" gracePeriod=600 Dec 02 15:22:49 crc kubenswrapper[4625]: E1202 15:22:49.404081 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:22:50 crc kubenswrapper[4625]: I1202 15:22:50.100392 4625 generic.go:334] "Generic (PLEG): container finished" podID="d911ea35-69e2-4943-999e-389a961ce243" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" exitCode=0 Dec 02 15:22:50 crc kubenswrapper[4625]: I1202 15:22:50.100627 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" event={"ID":"d911ea35-69e2-4943-999e-389a961ce243","Type":"ContainerDied","Data":"10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37"} Dec 02 15:22:50 crc kubenswrapper[4625]: I1202 15:22:50.100738 4625 scope.go:117] "RemoveContainer" containerID="e879b6051207e9e1241754852b2a846deb0a2c61815c504ee18197eab67afcda" Dec 02 15:22:50 crc kubenswrapper[4625]: I1202 15:22:50.102662 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:22:50 crc kubenswrapper[4625]: E1202 15:22:50.102940 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
Dec 02 15:23:03 crc kubenswrapper[4625]: I1202 15:23:03.856814 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37"
Dec 02 15:23:03 crc kubenswrapper[4625]: E1202 15:23:03.857795 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:23:16 crc kubenswrapper[4625]: I1202 15:23:16.857608 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37"
Dec 02 15:23:16 crc kubenswrapper[4625]: E1202 15:23:16.858887 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:23:28 crc kubenswrapper[4625]: I1202 15:23:28.872886 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37"
Dec 02 15:23:28 crc kubenswrapper[4625]: E1202 15:23:28.873923 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:23:39 crc kubenswrapper[4625]: I1202 15:23:39.556889 4625 scope.go:117] "RemoveContainer" containerID="01af01b483c76b886c4b809c115150cf75b92d66554136b7c44eb2a1daeac2d8"
Dec 02 15:23:42 crc kubenswrapper[4625]: I1202 15:23:42.858084 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37"
Dec 02 15:23:42 crc kubenswrapper[4625]: E1202 15:23:42.858623 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"
Dec 02 15:23:47 crc kubenswrapper[4625]: I1202 15:23:47.857137 4625 generic.go:334] "Generic (PLEG): container finished" podID="7128569a-269e-4043-8119-7a880bf03aa0" containerID="b3a498a3c61b8e912869d5a113ee7cdd0e5de71dbb4a28bdb6e826b5604398e1" exitCode=0
Dec 02 15:23:47 crc kubenswrapper[4625]: I1202 15:23:47.857237 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7zl6r/must-gather-xcs4w" event={"ID":"7128569a-269e-4043-8119-7a880bf03aa0","Type":"ContainerDied","Data":"b3a498a3c61b8e912869d5a113ee7cdd0e5de71dbb4a28bdb6e826b5604398e1"}
event={"ID":"7128569a-269e-4043-8119-7a880bf03aa0","Type":"ContainerDied","Data":"b3a498a3c61b8e912869d5a113ee7cdd0e5de71dbb4a28bdb6e826b5604398e1"} Dec 02 15:23:47 crc kubenswrapper[4625]: I1202 15:23:47.859042 4625 scope.go:117] "RemoveContainer" containerID="b3a498a3c61b8e912869d5a113ee7cdd0e5de71dbb4a28bdb6e826b5604398e1" Dec 02 15:23:48 crc kubenswrapper[4625]: I1202 15:23:48.314943 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7zl6r_must-gather-xcs4w_7128569a-269e-4043-8119-7a880bf03aa0/gather/0.log" Dec 02 15:23:56 crc kubenswrapper[4625]: I1202 15:23:56.858145 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:23:56 crc kubenswrapper[4625]: E1202 15:23:56.859410 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:24:02 crc kubenswrapper[4625]: I1202 15:24:02.746078 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7zl6r/must-gather-xcs4w"] Dec 02 15:24:02 crc kubenswrapper[4625]: I1202 15:24:02.746941 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-7zl6r/must-gather-xcs4w" podUID="7128569a-269e-4043-8119-7a880bf03aa0" containerName="copy" containerID="cri-o://64d2cd8bcc6890a636a1d64c74eb3000c0dc8bf89c5344c407e1ff9f12aaa0aa" gracePeriod=2 Dec 02 15:24:02 crc kubenswrapper[4625]: I1202 15:24:02.760264 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7zl6r/must-gather-xcs4w"] Dec 02 15:24:03 crc kubenswrapper[4625]: I1202 15:24:03.222822 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7zl6r_must-gather-xcs4w_7128569a-269e-4043-8119-7a880bf03aa0/copy/0.log" Dec 02 15:24:03 crc kubenswrapper[4625]: I1202 15:24:03.230730 4625 generic.go:334] "Generic (PLEG): container finished" podID="7128569a-269e-4043-8119-7a880bf03aa0" containerID="64d2cd8bcc6890a636a1d64c74eb3000c0dc8bf89c5344c407e1ff9f12aaa0aa" exitCode=143 Dec 02 15:24:03 crc kubenswrapper[4625]: I1202 15:24:03.598512 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7zl6r_must-gather-xcs4w_7128569a-269e-4043-8119-7a880bf03aa0/copy/0.log" Dec 02 15:24:03 crc kubenswrapper[4625]: I1202 15:24:03.599365 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7zl6r/must-gather-xcs4w" Dec 02 15:24:03 crc kubenswrapper[4625]: I1202 15:24:03.677064 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9tsx\" (UniqueName: \"kubernetes.io/projected/7128569a-269e-4043-8119-7a880bf03aa0-kube-api-access-n9tsx\") pod \"7128569a-269e-4043-8119-7a880bf03aa0\" (UID: \"7128569a-269e-4043-8119-7a880bf03aa0\") " Dec 02 15:24:03 crc kubenswrapper[4625]: I1202 15:24:03.677151 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7128569a-269e-4043-8119-7a880bf03aa0-must-gather-output\") pod \"7128569a-269e-4043-8119-7a880bf03aa0\" (UID: \"7128569a-269e-4043-8119-7a880bf03aa0\") " Dec 02 15:24:03 crc kubenswrapper[4625]: I1202 15:24:03.709357 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7128569a-269e-4043-8119-7a880bf03aa0-kube-api-access-n9tsx" (OuterVolumeSpecName: "kube-api-access-n9tsx") pod "7128569a-269e-4043-8119-7a880bf03aa0" (UID: "7128569a-269e-4043-8119-7a880bf03aa0"). InnerVolumeSpecName "kube-api-access-n9tsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:24:04 crc kubenswrapper[4625]: I1202 15:24:04.142663 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9tsx\" (UniqueName: \"kubernetes.io/projected/7128569a-269e-4043-8119-7a880bf03aa0-kube-api-access-n9tsx\") on node \"crc\" DevicePath \"\"" Dec 02 15:24:04 crc kubenswrapper[4625]: I1202 15:24:04.256984 4625 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7zl6r_must-gather-xcs4w_7128569a-269e-4043-8119-7a880bf03aa0/copy/0.log" Dec 02 15:24:04 crc kubenswrapper[4625]: I1202 15:24:04.257832 4625 scope.go:117] "RemoveContainer" containerID="64d2cd8bcc6890a636a1d64c74eb3000c0dc8bf89c5344c407e1ff9f12aaa0aa" Dec 02 15:24:04 crc kubenswrapper[4625]: I1202 15:24:04.257995 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7zl6r/must-gather-xcs4w" Dec 02 15:24:04 crc kubenswrapper[4625]: I1202 15:24:04.290355 4625 scope.go:117] "RemoveContainer" containerID="b3a498a3c61b8e912869d5a113ee7cdd0e5de71dbb4a28bdb6e826b5604398e1" Dec 02 15:24:04 crc kubenswrapper[4625]: I1202 15:24:04.299861 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7128569a-269e-4043-8119-7a880bf03aa0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7128569a-269e-4043-8119-7a880bf03aa0" (UID: "7128569a-269e-4043-8119-7a880bf03aa0"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:24:04 crc kubenswrapper[4625]: I1202 15:24:04.351181 4625 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7128569a-269e-4043-8119-7a880bf03aa0-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 02 15:24:04 crc kubenswrapper[4625]: I1202 15:24:04.874625 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7128569a-269e-4043-8119-7a880bf03aa0" path="/var/lib/kubelet/pods/7128569a-269e-4043-8119-7a880bf03aa0/volumes" Dec 02 15:24:09 crc kubenswrapper[4625]: I1202 15:24:09.857655 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:24:09 crc kubenswrapper[4625]: E1202 15:24:09.858160 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:24:13 crc kubenswrapper[4625]: I1202 15:24:13.896171 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s9scm"] Dec 02 15:24:13 crc kubenswrapper[4625]: E1202 15:24:13.897519 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7128569a-269e-4043-8119-7a880bf03aa0" containerName="copy" Dec 02 15:24:13 crc kubenswrapper[4625]: I1202 15:24:13.897547 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="7128569a-269e-4043-8119-7a880bf03aa0" containerName="copy" Dec 02 15:24:13 crc kubenswrapper[4625]: E1202 15:24:13.897574 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435bd873-5e0f-4479-b59b-1fd1f39fd50e" containerName="extract-utilities" Dec 02 15:24:13 crc kubenswrapper[4625]: I1202 15:24:13.897589 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="435bd873-5e0f-4479-b59b-1fd1f39fd50e" containerName="extract-utilities" Dec 02 15:24:13 crc kubenswrapper[4625]: E1202 15:24:13.897615 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435bd873-5e0f-4479-b59b-1fd1f39fd50e" containerName="extract-content" Dec 02 15:24:13 crc kubenswrapper[4625]: I1202 15:24:13.897627 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="435bd873-5e0f-4479-b59b-1fd1f39fd50e" containerName="extract-content" Dec 02 15:24:13 crc kubenswrapper[4625]: E1202 15:24:13.897665 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435bd873-5e0f-4479-b59b-1fd1f39fd50e" containerName="registry-server" Dec 02 15:24:13 crc kubenswrapper[4625]: I1202 15:24:13.897677 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="435bd873-5e0f-4479-b59b-1fd1f39fd50e" containerName="registry-server" Dec 02 15:24:13 crc kubenswrapper[4625]: E1202 15:24:13.897728 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7128569a-269e-4043-8119-7a880bf03aa0" containerName="gather" Dec 02 15:24:13 crc kubenswrapper[4625]: I1202 15:24:13.897739 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="7128569a-269e-4043-8119-7a880bf03aa0" containerName="gather" Dec 02 15:24:13 crc kubenswrapper[4625]: I1202 15:24:13.898138 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="435bd873-5e0f-4479-b59b-1fd1f39fd50e" 
containerName="registry-server" Dec 02 15:24:13 crc kubenswrapper[4625]: I1202 15:24:13.898201 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="7128569a-269e-4043-8119-7a880bf03aa0" containerName="gather" Dec 02 15:24:13 crc kubenswrapper[4625]: I1202 15:24:13.898229 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="7128569a-269e-4043-8119-7a880bf03aa0" containerName="copy" Dec 02 15:24:13 crc kubenswrapper[4625]: I1202 15:24:13.901414 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9scm" Dec 02 15:24:13 crc kubenswrapper[4625]: I1202 15:24:13.914651 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9scm"] Dec 02 15:24:13 crc kubenswrapper[4625]: I1202 15:24:13.952577 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a02553-687f-4c84-aea0-d4dd10bb12fd-utilities\") pod \"certified-operators-s9scm\" (UID: \"37a02553-687f-4c84-aea0-d4dd10bb12fd\") " pod="openshift-marketplace/certified-operators-s9scm" Dec 02 15:24:13 crc kubenswrapper[4625]: I1202 15:24:13.952781 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdfwj\" (UniqueName: \"kubernetes.io/projected/37a02553-687f-4c84-aea0-d4dd10bb12fd-kube-api-access-gdfwj\") pod \"certified-operators-s9scm\" (UID: \"37a02553-687f-4c84-aea0-d4dd10bb12fd\") " pod="openshift-marketplace/certified-operators-s9scm" Dec 02 15:24:13 crc kubenswrapper[4625]: I1202 15:24:13.952944 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a02553-687f-4c84-aea0-d4dd10bb12fd-catalog-content\") pod \"certified-operators-s9scm\" (UID: \"37a02553-687f-4c84-aea0-d4dd10bb12fd\") " pod="openshift-marketplace/certified-operators-s9scm" Dec 02 15:24:14 crc kubenswrapper[4625]: I1202 15:24:14.055461 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a02553-687f-4c84-aea0-d4dd10bb12fd-utilities\") pod \"certified-operators-s9scm\" (UID: \"37a02553-687f-4c84-aea0-d4dd10bb12fd\") " pod="openshift-marketplace/certified-operators-s9scm" Dec 02 15:24:14 crc kubenswrapper[4625]: I1202 15:24:14.055569 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdfwj\" (UniqueName: \"kubernetes.io/projected/37a02553-687f-4c84-aea0-d4dd10bb12fd-kube-api-access-gdfwj\") pod \"certified-operators-s9scm\" (UID: \"37a02553-687f-4c84-aea0-d4dd10bb12fd\") " pod="openshift-marketplace/certified-operators-s9scm" Dec 02 15:24:14 crc kubenswrapper[4625]: I1202 15:24:14.055632 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a02553-687f-4c84-aea0-d4dd10bb12fd-catalog-content\") pod \"certified-operators-s9scm\" (UID: \"37a02553-687f-4c84-aea0-d4dd10bb12fd\") " pod="openshift-marketplace/certified-operators-s9scm" Dec 02 15:24:14 crc kubenswrapper[4625]: I1202 15:24:14.056027 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a02553-687f-4c84-aea0-d4dd10bb12fd-utilities\") pod \"certified-operators-s9scm\" (UID: \"37a02553-687f-4c84-aea0-d4dd10bb12fd\") " 
pod="openshift-marketplace/certified-operators-s9scm" Dec 02 15:24:14 crc kubenswrapper[4625]: I1202 15:24:14.056123 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a02553-687f-4c84-aea0-d4dd10bb12fd-catalog-content\") pod \"certified-operators-s9scm\" (UID: \"37a02553-687f-4c84-aea0-d4dd10bb12fd\") " pod="openshift-marketplace/certified-operators-s9scm" Dec 02 15:24:14 crc kubenswrapper[4625]: I1202 15:24:14.075699 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdfwj\" (UniqueName: \"kubernetes.io/projected/37a02553-687f-4c84-aea0-d4dd10bb12fd-kube-api-access-gdfwj\") pod \"certified-operators-s9scm\" (UID: \"37a02553-687f-4c84-aea0-d4dd10bb12fd\") " pod="openshift-marketplace/certified-operators-s9scm" Dec 02 15:24:14 crc kubenswrapper[4625]: I1202 15:24:14.259695 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9scm" Dec 02 15:24:14 crc kubenswrapper[4625]: I1202 15:24:14.636103 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9scm"] Dec 02 15:24:14 crc kubenswrapper[4625]: W1202 15:24:14.650976 4625 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a02553_687f_4c84_aea0_d4dd10bb12fd.slice/crio-de2240ffb1891352bfbbd7a2096d40cd4a2165585dba9452a10463fbe83c4093 WatchSource:0}: Error finding container de2240ffb1891352bfbbd7a2096d40cd4a2165585dba9452a10463fbe83c4093: Status 404 returned error can't find the container with id de2240ffb1891352bfbbd7a2096d40cd4a2165585dba9452a10463fbe83c4093 Dec 02 15:24:15 crc kubenswrapper[4625]: I1202 15:24:15.403380 4625 generic.go:334] "Generic (PLEG): container finished" podID="37a02553-687f-4c84-aea0-d4dd10bb12fd" containerID="2f50d5680749a05b9f0f48a5096bdd4719772d3b67a8c573cb9a8864b015b4a7" exitCode=0 Dec 02 15:24:15 crc kubenswrapper[4625]: I1202 15:24:15.403447 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9scm" event={"ID":"37a02553-687f-4c84-aea0-d4dd10bb12fd","Type":"ContainerDied","Data":"2f50d5680749a05b9f0f48a5096bdd4719772d3b67a8c573cb9a8864b015b4a7"} Dec 02 15:24:15 crc kubenswrapper[4625]: I1202 15:24:15.403676 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9scm" event={"ID":"37a02553-687f-4c84-aea0-d4dd10bb12fd","Type":"ContainerStarted","Data":"de2240ffb1891352bfbbd7a2096d40cd4a2165585dba9452a10463fbe83c4093"} Dec 02 15:24:17 crc kubenswrapper[4625]: I1202 15:24:17.455209 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9scm" event={"ID":"37a02553-687f-4c84-aea0-d4dd10bb12fd","Type":"ContainerStarted","Data":"9fc05f6e88c2d3e244a549d9cd4329f1c54afd1f78ac6c90659065d004c22eba"} Dec 02 15:24:18 crc kubenswrapper[4625]: I1202 15:24:18.472600 4625 generic.go:334] "Generic (PLEG): container finished" podID="37a02553-687f-4c84-aea0-d4dd10bb12fd" containerID="9fc05f6e88c2d3e244a549d9cd4329f1c54afd1f78ac6c90659065d004c22eba" exitCode=0 Dec 02 15:24:18 crc kubenswrapper[4625]: I1202 15:24:18.472813 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9scm" event={"ID":"37a02553-687f-4c84-aea0-d4dd10bb12fd","Type":"ContainerDied","Data":"9fc05f6e88c2d3e244a549d9cd4329f1c54afd1f78ac6c90659065d004c22eba"} Dec 
02 15:24:19 crc kubenswrapper[4625]: I1202 15:24:19.489925 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9scm" event={"ID":"37a02553-687f-4c84-aea0-d4dd10bb12fd","Type":"ContainerStarted","Data":"4a354cde3adc2c66917b572dbfa1d0435c3e1fb38e711752d7f8fc50f7481380"} Dec 02 15:24:20 crc kubenswrapper[4625]: I1202 15:24:20.523254 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s9scm" podStartSLOduration=3.832073925 podStartE2EDuration="7.52322571s" podCreationTimestamp="2025-12-02 15:24:13 +0000 UTC" firstStartedPulling="2025-12-02 15:24:15.407757548 +0000 UTC m=+6011.369934623" lastFinishedPulling="2025-12-02 15:24:19.098909333 +0000 UTC m=+6015.061086408" observedRunningTime="2025-12-02 15:24:20.516278592 +0000 UTC m=+6016.478455657" watchObservedRunningTime="2025-12-02 15:24:20.52322571 +0000 UTC m=+6016.485402795" Dec 02 15:24:24 crc kubenswrapper[4625]: I1202 15:24:24.260561 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s9scm" Dec 02 15:24:24 crc kubenswrapper[4625]: I1202 15:24:24.261555 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s9scm" Dec 02 15:24:24 crc kubenswrapper[4625]: I1202 15:24:24.339665 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s9scm" Dec 02 15:24:24 crc kubenswrapper[4625]: I1202 15:24:24.622863 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s9scm" Dec 02 15:24:24 crc kubenswrapper[4625]: I1202 15:24:24.693425 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9scm"] Dec 02 15:24:24 crc kubenswrapper[4625]: I1202 15:24:24.863391 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:24:24 crc kubenswrapper[4625]: E1202 15:24:24.863872 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:24:26 crc kubenswrapper[4625]: I1202 15:24:26.570202 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s9scm" podUID="37a02553-687f-4c84-aea0-d4dd10bb12fd" containerName="registry-server" containerID="cri-o://4a354cde3adc2c66917b572dbfa1d0435c3e1fb38e711752d7f8fc50f7481380" gracePeriod=2 Dec 02 15:24:27 crc kubenswrapper[4625]: I1202 15:24:27.584469 4625 generic.go:334] "Generic (PLEG): container finished" podID="37a02553-687f-4c84-aea0-d4dd10bb12fd" containerID="4a354cde3adc2c66917b572dbfa1d0435c3e1fb38e711752d7f8fc50f7481380" exitCode=0 Dec 02 15:24:27 crc kubenswrapper[4625]: I1202 15:24:27.584838 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9scm" event={"ID":"37a02553-687f-4c84-aea0-d4dd10bb12fd","Type":"ContainerDied","Data":"4a354cde3adc2c66917b572dbfa1d0435c3e1fb38e711752d7f8fc50f7481380"} Dec 02 15:24:27 crc kubenswrapper[4625]: I1202 
Dec 02 15:24:27 crc kubenswrapper[4625]: I1202 15:24:27.584970 4625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de2240ffb1891352bfbbd7a2096d40cd4a2165585dba9452a10463fbe83c4093"
Dec 02 15:24:27 crc kubenswrapper[4625]: I1202 15:24:27.637558 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9scm"
Dec 02 15:24:27 crc kubenswrapper[4625]: I1202 15:24:27.741842 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a02553-687f-4c84-aea0-d4dd10bb12fd-catalog-content\") pod \"37a02553-687f-4c84-aea0-d4dd10bb12fd\" (UID: \"37a02553-687f-4c84-aea0-d4dd10bb12fd\") "
Dec 02 15:24:27 crc kubenswrapper[4625]: I1202 15:24:27.742135 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a02553-687f-4c84-aea0-d4dd10bb12fd-utilities\") pod \"37a02553-687f-4c84-aea0-d4dd10bb12fd\" (UID: \"37a02553-687f-4c84-aea0-d4dd10bb12fd\") "
Dec 02 15:24:27 crc kubenswrapper[4625]: I1202 15:24:27.742171 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdfwj\" (UniqueName: \"kubernetes.io/projected/37a02553-687f-4c84-aea0-d4dd10bb12fd-kube-api-access-gdfwj\") pod \"37a02553-687f-4c84-aea0-d4dd10bb12fd\" (UID: \"37a02553-687f-4c84-aea0-d4dd10bb12fd\") "
Dec 02 15:24:27 crc kubenswrapper[4625]: I1202 15:24:27.743219 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37a02553-687f-4c84-aea0-d4dd10bb12fd-utilities" (OuterVolumeSpecName: "utilities") pod "37a02553-687f-4c84-aea0-d4dd10bb12fd" (UID: "37a02553-687f-4c84-aea0-d4dd10bb12fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 02 15:24:27 crc kubenswrapper[4625]: I1202 15:24:27.756833 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37a02553-687f-4c84-aea0-d4dd10bb12fd-kube-api-access-gdfwj" (OuterVolumeSpecName: "kube-api-access-gdfwj") pod "37a02553-687f-4c84-aea0-d4dd10bb12fd" (UID: "37a02553-687f-4c84-aea0-d4dd10bb12fd"). InnerVolumeSpecName "kube-api-access-gdfwj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 02 15:24:27 crc kubenswrapper[4625]: I1202 15:24:27.844874 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a02553-687f-4c84-aea0-d4dd10bb12fd-utilities\") on node \"crc\" DevicePath \"\""
Dec 02 15:24:27 crc kubenswrapper[4625]: I1202 15:24:27.845225 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdfwj\" (UniqueName: \"kubernetes.io/projected/37a02553-687f-4c84-aea0-d4dd10bb12fd-kube-api-access-gdfwj\") on node \"crc\" DevicePath \"\""
Dec 02 15:24:27 crc kubenswrapper[4625]: I1202 15:24:27.946158 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37a02553-687f-4c84-aea0-d4dd10bb12fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37a02553-687f-4c84-aea0-d4dd10bb12fd" (UID: "37a02553-687f-4c84-aea0-d4dd10bb12fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:24:28 crc kubenswrapper[4625]: I1202 15:24:28.049881 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a02553-687f-4c84-aea0-d4dd10bb12fd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:24:28 crc kubenswrapper[4625]: I1202 15:24:28.597767 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9scm" Dec 02 15:24:28 crc kubenswrapper[4625]: I1202 15:24:28.653267 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9scm"] Dec 02 15:24:28 crc kubenswrapper[4625]: I1202 15:24:28.667511 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s9scm"] Dec 02 15:24:28 crc kubenswrapper[4625]: I1202 15:24:28.870781 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37a02553-687f-4c84-aea0-d4dd10bb12fd" path="/var/lib/kubelet/pods/37a02553-687f-4c84-aea0-d4dd10bb12fd/volumes" Dec 02 15:24:38 crc kubenswrapper[4625]: I1202 15:24:38.856563 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:24:38 crc kubenswrapper[4625]: E1202 15:24:38.857827 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:24:49 crc kubenswrapper[4625]: I1202 15:24:49.856163 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:24:49 crc kubenswrapper[4625]: E1202 15:24:49.857021 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.189791 4625 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xnd8b"] Dec 02 15:24:54 crc kubenswrapper[4625]: E1202 15:24:54.190720 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a02553-687f-4c84-aea0-d4dd10bb12fd" containerName="extract-utilities" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.190734 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a02553-687f-4c84-aea0-d4dd10bb12fd" containerName="extract-utilities" Dec 02 15:24:54 crc kubenswrapper[4625]: E1202 15:24:54.190750 4625 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a02553-687f-4c84-aea0-d4dd10bb12fd" containerName="registry-server" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.190756 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a02553-687f-4c84-aea0-d4dd10bb12fd" containerName="registry-server" Dec 02 15:24:54 crc kubenswrapper[4625]: E1202 15:24:54.190792 4625 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a02553-687f-4c84-aea0-d4dd10bb12fd" containerName="extract-content" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.190802 4625 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a02553-687f-4c84-aea0-d4dd10bb12fd" containerName="extract-content" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.191384 4625 memory_manager.go:354] "RemoveStaleState removing state" podUID="37a02553-687f-4c84-aea0-d4dd10bb12fd" containerName="registry-server" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.195208 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.221075 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnd8b"] Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.293349 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-catalog-content\") pod \"redhat-marketplace-xnd8b\" (UID: \"303a73ae-9bcb-41fb-9dc0-23f668cad5b6\") " pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.293506 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlb2l\" (UniqueName: \"kubernetes.io/projected/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-kube-api-access-vlb2l\") pod \"redhat-marketplace-xnd8b\" (UID: \"303a73ae-9bcb-41fb-9dc0-23f668cad5b6\") " pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.293613 4625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-utilities\") pod \"redhat-marketplace-xnd8b\" (UID: \"303a73ae-9bcb-41fb-9dc0-23f668cad5b6\") " pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.395358 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-catalog-content\") pod \"redhat-marketplace-xnd8b\" (UID: \"303a73ae-9bcb-41fb-9dc0-23f668cad5b6\") " pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.395492 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlb2l\" (UniqueName: \"kubernetes.io/projected/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-kube-api-access-vlb2l\") pod \"redhat-marketplace-xnd8b\" (UID: \"303a73ae-9bcb-41fb-9dc0-23f668cad5b6\") " pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.395834 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-catalog-content\") pod \"redhat-marketplace-xnd8b\" (UID: \"303a73ae-9bcb-41fb-9dc0-23f668cad5b6\") " pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.395985 4625 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-utilities\") pod \"redhat-marketplace-xnd8b\" (UID: \"303a73ae-9bcb-41fb-9dc0-23f668cad5b6\") " pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.396413 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-utilities\") pod \"redhat-marketplace-xnd8b\" (UID: \"303a73ae-9bcb-41fb-9dc0-23f668cad5b6\") " pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.415551 4625 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlb2l\" (UniqueName: \"kubernetes.io/projected/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-kube-api-access-vlb2l\") pod \"redhat-marketplace-xnd8b\" (UID: \"303a73ae-9bcb-41fb-9dc0-23f668cad5b6\") " pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.525964 4625 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.890779 4625 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnd8b"] Dec 02 15:24:54 crc kubenswrapper[4625]: I1202 15:24:54.964055 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnd8b" event={"ID":"303a73ae-9bcb-41fb-9dc0-23f668cad5b6","Type":"ContainerStarted","Data":"1fe307bac5aeabe01620e0a7f281d8a2811fb48e9bd2391cfb904335a36a1b17"} Dec 02 15:24:55 crc kubenswrapper[4625]: I1202 15:24:55.975921 4625 generic.go:334] "Generic (PLEG): container finished" podID="303a73ae-9bcb-41fb-9dc0-23f668cad5b6" containerID="2abf8a28f7b1c005a53f01f6d3ce030535616fddaee55f1ed6d36460a59285cf" exitCode=0 Dec 02 15:24:55 crc kubenswrapper[4625]: I1202 15:24:55.976142 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnd8b" event={"ID":"303a73ae-9bcb-41fb-9dc0-23f668cad5b6","Type":"ContainerDied","Data":"2abf8a28f7b1c005a53f01f6d3ce030535616fddaee55f1ed6d36460a59285cf"} Dec 02 15:24:58 crc kubenswrapper[4625]: I1202 15:24:57.999357 4625 generic.go:334] "Generic (PLEG): container finished" podID="303a73ae-9bcb-41fb-9dc0-23f668cad5b6" containerID="320a35fcf5f40feec3e420747a244a8fa7b3f9a04dcaee2fe1b8fadd7dde8a8d" exitCode=0 Dec 02 15:24:58 crc kubenswrapper[4625]: I1202 15:24:58.000242 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnd8b" event={"ID":"303a73ae-9bcb-41fb-9dc0-23f668cad5b6","Type":"ContainerDied","Data":"320a35fcf5f40feec3e420747a244a8fa7b3f9a04dcaee2fe1b8fadd7dde8a8d"} Dec 02 15:24:59 crc kubenswrapper[4625]: I1202 15:24:59.017343 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnd8b" event={"ID":"303a73ae-9bcb-41fb-9dc0-23f668cad5b6","Type":"ContainerStarted","Data":"9925942ddf9add8a3b8e736d541f54088da08178ea3e394fafb5fdf1cb17aed2"} Dec 02 15:24:59 crc kubenswrapper[4625]: I1202 15:24:59.039494 4625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xnd8b" podStartSLOduration=2.419218122 podStartE2EDuration="5.039449109s" podCreationTimestamp="2025-12-02 15:24:54 +0000 UTC" firstStartedPulling="2025-12-02 15:24:55.977913075 +0000 UTC m=+6051.940090150" 
lastFinishedPulling="2025-12-02 15:24:58.598144062 +0000 UTC m=+6054.560321137" observedRunningTime="2025-12-02 15:24:59.037492817 +0000 UTC m=+6054.999669922" watchObservedRunningTime="2025-12-02 15:24:59.039449109 +0000 UTC m=+6055.001626194" Dec 02 15:25:01 crc kubenswrapper[4625]: I1202 15:25:01.856873 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:25:01 crc kubenswrapper[4625]: E1202 15:25:01.857612 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:25:04 crc kubenswrapper[4625]: I1202 15:25:04.527164 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:25:04 crc kubenswrapper[4625]: I1202 15:25:04.527535 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:25:04 crc kubenswrapper[4625]: I1202 15:25:04.643050 4625 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:25:05 crc kubenswrapper[4625]: I1202 15:25:05.170806 4625 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:25:05 crc kubenswrapper[4625]: I1202 15:25:05.264478 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnd8b"] Dec 02 15:25:07 crc kubenswrapper[4625]: I1202 15:25:07.104251 4625 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xnd8b" podUID="303a73ae-9bcb-41fb-9dc0-23f668cad5b6" containerName="registry-server" containerID="cri-o://9925942ddf9add8a3b8e736d541f54088da08178ea3e394fafb5fdf1cb17aed2" gracePeriod=2 Dec 02 15:25:07 crc kubenswrapper[4625]: I1202 15:25:07.550226 4625 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:25:07 crc kubenswrapper[4625]: I1202 15:25:07.686140 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-utilities\") pod \"303a73ae-9bcb-41fb-9dc0-23f668cad5b6\" (UID: \"303a73ae-9bcb-41fb-9dc0-23f668cad5b6\") " Dec 02 15:25:07 crc kubenswrapper[4625]: I1202 15:25:07.686327 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlb2l\" (UniqueName: \"kubernetes.io/projected/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-kube-api-access-vlb2l\") pod \"303a73ae-9bcb-41fb-9dc0-23f668cad5b6\" (UID: \"303a73ae-9bcb-41fb-9dc0-23f668cad5b6\") " Dec 02 15:25:07 crc kubenswrapper[4625]: I1202 15:25:07.686412 4625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-catalog-content\") pod \"303a73ae-9bcb-41fb-9dc0-23f668cad5b6\" (UID: \"303a73ae-9bcb-41fb-9dc0-23f668cad5b6\") " Dec 02 15:25:07 crc kubenswrapper[4625]: I1202 15:25:07.687091 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-utilities" (OuterVolumeSpecName: "utilities") pod "303a73ae-9bcb-41fb-9dc0-23f668cad5b6" (UID: "303a73ae-9bcb-41fb-9dc0-23f668cad5b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:25:07 crc kubenswrapper[4625]: I1202 15:25:07.706528 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-kube-api-access-vlb2l" (OuterVolumeSpecName: "kube-api-access-vlb2l") pod "303a73ae-9bcb-41fb-9dc0-23f668cad5b6" (UID: "303a73ae-9bcb-41fb-9dc0-23f668cad5b6"). InnerVolumeSpecName "kube-api-access-vlb2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 02 15:25:07 crc kubenswrapper[4625]: I1202 15:25:07.717773 4625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "303a73ae-9bcb-41fb-9dc0-23f668cad5b6" (UID: "303a73ae-9bcb-41fb-9dc0-23f668cad5b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 02 15:25:07 crc kubenswrapper[4625]: I1202 15:25:07.790680 4625 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-utilities\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:07 crc kubenswrapper[4625]: I1202 15:25:07.790724 4625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlb2l\" (UniqueName: \"kubernetes.io/projected/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-kube-api-access-vlb2l\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:07 crc kubenswrapper[4625]: I1202 15:25:07.790739 4625 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/303a73ae-9bcb-41fb-9dc0-23f668cad5b6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 02 15:25:08 crc kubenswrapper[4625]: I1202 15:25:08.120603 4625 generic.go:334] "Generic (PLEG): container finished" podID="303a73ae-9bcb-41fb-9dc0-23f668cad5b6" containerID="9925942ddf9add8a3b8e736d541f54088da08178ea3e394fafb5fdf1cb17aed2" exitCode=0 Dec 02 15:25:08 crc kubenswrapper[4625]: I1202 15:25:08.120672 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnd8b" event={"ID":"303a73ae-9bcb-41fb-9dc0-23f668cad5b6","Type":"ContainerDied","Data":"9925942ddf9add8a3b8e736d541f54088da08178ea3e394fafb5fdf1cb17aed2"} Dec 02 15:25:08 crc kubenswrapper[4625]: I1202 15:25:08.120703 4625 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnd8b" Dec 02 15:25:08 crc kubenswrapper[4625]: I1202 15:25:08.120729 4625 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnd8b" event={"ID":"303a73ae-9bcb-41fb-9dc0-23f668cad5b6","Type":"ContainerDied","Data":"1fe307bac5aeabe01620e0a7f281d8a2811fb48e9bd2391cfb904335a36a1b17"} Dec 02 15:25:08 crc kubenswrapper[4625]: I1202 15:25:08.120762 4625 scope.go:117] "RemoveContainer" containerID="9925942ddf9add8a3b8e736d541f54088da08178ea3e394fafb5fdf1cb17aed2" Dec 02 15:25:08 crc kubenswrapper[4625]: I1202 15:25:08.158368 4625 scope.go:117] "RemoveContainer" containerID="320a35fcf5f40feec3e420747a244a8fa7b3f9a04dcaee2fe1b8fadd7dde8a8d" Dec 02 15:25:08 crc kubenswrapper[4625]: I1202 15:25:08.189930 4625 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnd8b"] Dec 02 15:25:08 crc kubenswrapper[4625]: I1202 15:25:08.196830 4625 scope.go:117] "RemoveContainer" containerID="2abf8a28f7b1c005a53f01f6d3ce030535616fddaee55f1ed6d36460a59285cf" Dec 02 15:25:08 crc kubenswrapper[4625]: I1202 15:25:08.212791 4625 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnd8b"] Dec 02 15:25:08 crc kubenswrapper[4625]: I1202 15:25:08.262900 4625 scope.go:117] "RemoveContainer" containerID="9925942ddf9add8a3b8e736d541f54088da08178ea3e394fafb5fdf1cb17aed2" Dec 02 15:25:08 crc kubenswrapper[4625]: E1202 15:25:08.264001 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9925942ddf9add8a3b8e736d541f54088da08178ea3e394fafb5fdf1cb17aed2\": container with ID starting with 9925942ddf9add8a3b8e736d541f54088da08178ea3e394fafb5fdf1cb17aed2 not found: ID does not exist" containerID="9925942ddf9add8a3b8e736d541f54088da08178ea3e394fafb5fdf1cb17aed2" Dec 02 15:25:08 crc kubenswrapper[4625]: I1202 15:25:08.264074 4625 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9925942ddf9add8a3b8e736d541f54088da08178ea3e394fafb5fdf1cb17aed2"} err="failed to get container status \"9925942ddf9add8a3b8e736d541f54088da08178ea3e394fafb5fdf1cb17aed2\": rpc error: code = NotFound desc = could not find container \"9925942ddf9add8a3b8e736d541f54088da08178ea3e394fafb5fdf1cb17aed2\": container with ID starting with 9925942ddf9add8a3b8e736d541f54088da08178ea3e394fafb5fdf1cb17aed2 not found: ID does not exist" Dec 02 15:25:08 crc kubenswrapper[4625]: I1202 15:25:08.264139 4625 scope.go:117] "RemoveContainer" containerID="320a35fcf5f40feec3e420747a244a8fa7b3f9a04dcaee2fe1b8fadd7dde8a8d" Dec 02 15:25:08 crc kubenswrapper[4625]: E1202 15:25:08.264858 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"320a35fcf5f40feec3e420747a244a8fa7b3f9a04dcaee2fe1b8fadd7dde8a8d\": container with ID starting with 320a35fcf5f40feec3e420747a244a8fa7b3f9a04dcaee2fe1b8fadd7dde8a8d not found: ID does not exist" containerID="320a35fcf5f40feec3e420747a244a8fa7b3f9a04dcaee2fe1b8fadd7dde8a8d" Dec 02 15:25:08 crc kubenswrapper[4625]: I1202 15:25:08.264977 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"320a35fcf5f40feec3e420747a244a8fa7b3f9a04dcaee2fe1b8fadd7dde8a8d"} err="failed to get container status \"320a35fcf5f40feec3e420747a244a8fa7b3f9a04dcaee2fe1b8fadd7dde8a8d\": rpc error: code = NotFound desc = could not find container \"320a35fcf5f40feec3e420747a244a8fa7b3f9a04dcaee2fe1b8fadd7dde8a8d\": container with ID starting with 320a35fcf5f40feec3e420747a244a8fa7b3f9a04dcaee2fe1b8fadd7dde8a8d not found: ID does not exist" Dec 02 15:25:08 crc kubenswrapper[4625]: I1202 15:25:08.265058 4625 scope.go:117] "RemoveContainer" containerID="2abf8a28f7b1c005a53f01f6d3ce030535616fddaee55f1ed6d36460a59285cf" Dec 02 15:25:08 crc kubenswrapper[4625]: E1202 15:25:08.265496 4625 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2abf8a28f7b1c005a53f01f6d3ce030535616fddaee55f1ed6d36460a59285cf\": container with ID starting with 2abf8a28f7b1c005a53f01f6d3ce030535616fddaee55f1ed6d36460a59285cf not found: ID does not exist" containerID="2abf8a28f7b1c005a53f01f6d3ce030535616fddaee55f1ed6d36460a59285cf" Dec 02 15:25:08 crc kubenswrapper[4625]: I1202 15:25:08.265542 4625 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2abf8a28f7b1c005a53f01f6d3ce030535616fddaee55f1ed6d36460a59285cf"} err="failed to get container status \"2abf8a28f7b1c005a53f01f6d3ce030535616fddaee55f1ed6d36460a59285cf\": rpc error: code = NotFound desc = could not find container \"2abf8a28f7b1c005a53f01f6d3ce030535616fddaee55f1ed6d36460a59285cf\": container with ID starting with 2abf8a28f7b1c005a53f01f6d3ce030535616fddaee55f1ed6d36460a59285cf not found: ID does not exist" Dec 02 15:25:08 crc kubenswrapper[4625]: I1202 15:25:08.871361 4625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="303a73ae-9bcb-41fb-9dc0-23f668cad5b6" path="/var/lib/kubelet/pods/303a73ae-9bcb-41fb-9dc0-23f668cad5b6/volumes" Dec 02 15:25:12 crc kubenswrapper[4625]: I1202 15:25:12.856880 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:25:12 crc kubenswrapper[4625]: E1202 15:25:12.857460 4625 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:25:24 crc kubenswrapper[4625]: I1202 15:25:24.863896 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:25:24 crc kubenswrapper[4625]: E1202 15:25:24.865078 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:25:39 crc kubenswrapper[4625]: I1202 15:25:39.856485 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:25:39 crc kubenswrapper[4625]: E1202 15:25:39.857556 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:25:52 crc kubenswrapper[4625]: I1202 15:25:52.856887 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:25:52 crc kubenswrapper[4625]: E1202 15:25:52.857805 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:26:05 crc kubenswrapper[4625]: I1202 15:26:05.856391 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:26:05 crc kubenswrapper[4625]: E1202 15:26:05.857388 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:26:17 crc kubenswrapper[4625]: I1202 15:26:17.856331 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:26:17 crc kubenswrapper[4625]: E1202 15:26:17.857005 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:26:30 crc kubenswrapper[4625]: I1202 15:26:30.856873 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:26:30 crc kubenswrapper[4625]: E1202 15:26:30.857976 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:26:45 crc kubenswrapper[4625]: I1202 15:26:45.855693 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:26:45 crc kubenswrapper[4625]: E1202 15:26:45.856509 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:26:56 crc kubenswrapper[4625]: I1202 15:26:56.857348 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:26:56 crc kubenswrapper[4625]: E1202 15:26:56.858774 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:27:09 crc kubenswrapper[4625]: I1202 15:27:09.857272 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:27:09 crc kubenswrapper[4625]: E1202 15:27:09.858530 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:27:20 crc kubenswrapper[4625]: I1202 15:27:20.857260 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:27:20 crc kubenswrapper[4625]: E1202 15:27:20.858180 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" 
podUID="d911ea35-69e2-4943-999e-389a961ce243" Dec 02 15:27:32 crc kubenswrapper[4625]: I1202 15:27:32.857581 4625 scope.go:117] "RemoveContainer" containerID="10e048a17bf9ee258a5e49e3ab9c0f9dc9a73aa3d171d9b3cbb846dd6988fd37" Dec 02 15:27:32 crc kubenswrapper[4625]: E1202 15:27:32.858579 4625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c6d9f_openshift-machine-config-operator(d911ea35-69e2-4943-999e-389a961ce243)\"" pod="openshift-machine-config-operator/machine-config-daemon-c6d9f" podUID="d911ea35-69e2-4943-999e-389a961ce243"